
Concurrent graph data preprocessing method based on FPGA

A graph data preprocessing technology applied in the field of embedded system data processing. It can solve problems such as performance bottlenecks, large performance differences, and the inability to support concurrent graph processing, thereby improving the memory access hit rate, reducing overhead, and improving the efficiency of concurrent graph computation.

Pending Publication Date: 2020-08-18
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

Existing solutions address the problem from the aspects of single-graph computing efficiency, designing graph data structures that are easy to add to, delete from, modify, and query, and optimizing resource sharing in concurrent scheduling. However, they ignore the fact that a graph algorithm shows large performance differences when processing different graph data, and that the same graph data will also encounter performance bottlenecks when processed by multiple graph algorithms. In practical applications, a single data structure is often not suitable for most of the concurrent graph processing problems encountered.



Examples


Embodiment Construction

[0029] As shown in Figure 1, this embodiment provides an FPGA-based concurrent graph data preprocessing method, comprising the following steps:

[0030] Step 1) According to the graph data information D, the strategy identifies the source data format, which defaults to a triplet (V_s, V_d, weight), and calculates the relevant characteristic parameters of D, including the number of graph nodes V_i, the number of edges E_i, and the density ρ_i.
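As a rough illustration of the feature extraction in Step 1), the Python sketch below reads a list of (V_s, V_d, weight) triples and computes the node count, edge count, and density. It is not the patent's implementation; in particular, the directed-graph density formula E / (V · (V − 1)) is an assumption, since the exact expression is not shown in the source text.

from collections import namedtuple

# Hypothetical triple record matching the default source format (V_s, V_d, weight).
Edge = namedtuple("Edge", ["src", "dst", "weight"])

def graph_features(edges):
    """Compute characteristic parameters of graph data D:
    node count V_i, edge count E_i, and density rho_i.

    The density formula E / (V * (V - 1)) is an assumption for a
    directed graph; the patent does not show the exact expression.
    """
    nodes = set()
    for e in edges:
        nodes.add(e.src)
        nodes.add(e.dst)
    v_i = len(nodes)
    e_i = len(edges)
    rho_i = e_i / (v_i * (v_i - 1)) if v_i > 1 else 0.0
    return v_i, e_i, rho_i

# Example usage with a tiny triple list.
edges = [Edge(0, 1, 1.0), Edge(1, 2, 0.5), Edge(2, 0, 2.0)]
print(graph_features(edges))  # -> (3, 3, 0.5)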

[0031] Step 2) Based on the graph data features obtained in step 1), a candidate data format is estimated; the candidate formats include matrix, adjacency list, tree, linked list, CSR, and CSC.

[0032] The specific steps for estimating the candidate format include:

[0033] 2.1) Calculate the relevant characteristic parameters of the graph data information D, including the number of graph nodes V_i, the number of edges E_i, and the density ρ_i.

[0034] 2.2) The calculated density ρ_i is compared with the preset threshold ρ_0 ...
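Since step 2.2) is truncated in the source text, the comparison below is only a hedged sketch of how the computed density ρ_i might be matched against a preset ρ_0 to pick one of the candidate formats; the threshold value 0.1 and the dense-to-matrix / sparse-to-CSR mapping are assumptions, not the patent's actual rule.

def choose_format(v_i, e_i, rho_i, rho_0=0.1):
    """Pick a candidate storage format by comparing the computed density
    rho_i against a preset threshold rho_0.

    The threshold value and the density-to-format mapping below are
    assumptions for illustration; the patent lists matrix, adjacency
    list, tree, linked list, CSR and CSC as candidates, but the
    comparison rule is cut off in the source text.
    """
    if rho_i >= rho_0:
        # Dense graphs: a plain adjacency matrix keeps accesses regular,
        # which suits blocked transfers to an FPGA accelerator.
        return "matrix"
    # Sparse graphs: compressed sparse row (or CSC for column-oriented
    # algorithms) avoids storing the many zero entries.
    return "CSR"

print(choose_format(v_i=3, e_i=3, rho_i=0.5))            # -> 'matrix'
print(choose_format(v_i=10**6, e_i=10**7, rho_i=1e-5))   # -> 'CSR'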


Abstract

A concurrent graph data preprocessing method based on an FPGA comprises the following steps: source graph data blocks and graph data characteristics are extracted from a graph data format information input set and an algorithm information input set, a graph data and graph algorithm combination matrix is generated, and graph data format pre-conversion is conducted; according to the power law of the graph data, a process allocation mode is determined through streaming-driven scheduling, and then matched data partitioning and parallel design are performed according to the attribute parameters of the FPGA accelerator in a heterogeneous platform; data preprocessing and scheduling optimization of the whole parallel graph processing flow are thereby realized on the FPGA. By combining the concurrent scheduling strategy of the GPU and FPGA acceleration platforms, the resource utilization rate and the overall performance are remarkably improved through overall preprocessing and scheduling optimization after the optimal data format is selected, and the high efficiency of the graph computation process is guaranteed.
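To make the power-law-driven partitioning mentioned in the abstract more concrete, here is a hedged Python sketch that separates high-degree "hub" vertices from the long tail before distributing edges across parallel lanes. The lane count, the hub threshold, and the round-robin assignment stand in for the FPGA accelerator's attribute parameters and are assumptions for illustration only, not the patent's actual scheduling rule.

from collections import defaultdict

def partition_by_degree(edges, num_lanes, hub_threshold=64):
    """Split (src, dst, weight) edges into hub edges and per-lane chunks.

    num_lanes and hub_threshold stand in for FPGA attribute parameters
    (e.g. number of parallel pipelines, on-chip buffer budget); the
    names and the splitting rule are assumptions made for this sketch.
    """
    out_degree = defaultdict(int)
    for src, _dst, _w in edges:
        out_degree[src] += 1

    hub_edges = []                               # high-degree sources, handled separately
    tail_lanes = [[] for _ in range(num_lanes)]  # power-law tail, spread round-robin
    for i, (src, dst, w) in enumerate(edges):
        if out_degree[src] >= hub_threshold:
            hub_edges.append((src, dst, w))
        else:
            tail_lanes[i % num_lanes].append((src, dst, w))
    return hub_edges, tail_lanes

# Tiny usage example with a low hub threshold so the split is visible.
edges = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (4, 5, 1.0), (6, 7, 1.0)]
hubs, lanes = partition_by_degree(edges, num_lanes=2, hub_threshold=3)
print(hubs)   # edges whose source has out-degree >= 3
print(lanes)  # remaining edges spread over two lanes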

Description

Technical Field

[0001] The invention relates to a technology in the field of embedded system data processing, in particular to an FPGA-based concurrent graph data preprocessing method.

Background Technique

[0002] In the environment of large-scale graph computing, concurrent graph query and graph analysis often suffer from high latency caused by data structures that are not suited to the current algorithm. Existing solutions address the problem from the aspects of single-graph computing efficiency, designing graph data structures that are easy to add to, delete from, modify, and query, and optimizing resource sharing in concurrent scheduling. However, they ignore the fact that a graph algorithm shows large performance differences when processing different graph data, and that the same graph data will also encounter performance bottlenecks when processed by multiple graph algorithms. In practical applications, a single data structure is often not suitable for most of the concurrent graph processing problems encountered.


Application Information

IPC (8): G06T1/20
CPC: G06T1/20
Inventors: 李超, 王靖, 王鹏宇, 朱浩瑾, 过敏意
Owner: SHANGHAI JIAO TONG UNIV