
Graph convolutional network software and hardware collaborative acceleration method based on in-memory calculation

A software-hardware co-design technique for convolutional networks on graphs, applied in biological neural network models, physical implementation, neural architecture, etc. It reduces computation and storage requirements, improves parallelism, and increases throughput.

Pending Publication Date: 2022-07-05
TSINGHUA UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0016] Therefore, the purpose of the present invention is twofold. At the deployment-algorithm level, it designs a GCN deployment algorithm for multi-core in-memory computing hardware, solving the problem of mapping irregular graph data onto a hardware array of fixed scale while improving computational parallelism. At the hardware level, it solves the problem of designing in-memory computing units and data flows. To these ends, a method for software-hardware collaborative acceleration of graph convolutional networks based on in-memory computing is proposed.
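The mapping problem described above — fitting an irregular graph's feature matrix onto fixed-size in-memory computing arrays — can be sketched as a simple tiling step. This is an illustrative sketch only; the function name, tile scheme, and array sizes are assumptions, not taken from the patent.

```python
# Hypothetical sketch: tiling a node-feature matrix onto fixed-size
# in-memory computing (crossbar) arrays. A graph with `num_nodes` nodes
# and `feat_dim`-dimensional features rarely fits a single crossbar, so
# the matrix is split into xbar_rows x xbar_cols blocks.

def tile_features(num_nodes, feat_dim, xbar_rows, xbar_cols):
    """Return the (row, col) origin of each crossbar tile covering the matrix."""
    tiles = []
    for r in range(0, num_nodes, xbar_rows):      # block rows of nodes
        for c in range(0, feat_dim, xbar_cols):   # block columns of features
            tiles.append((r, c))
    return tiles

# 10 nodes with 6-dim features on 4x4 crossbars: 3 row blocks x 2 col blocks
tiles = tile_features(num_nodes=10, feat_dim=6, xbar_rows=4, xbar_cols=4)
print(len(tiles))  # 6 crossbar arrays needed
```

Tiles along a row block can then be driven in parallel, which is one way the parallelism mentioned above could be exploited.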



Examples


Embodiment Construction

[0060] It should be noted that the embodiments in the present application, and the features within those embodiments, may be combined with each other provided there is no conflict. The present invention will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.

[0061] In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

[0062] The following describes the...



Abstract

The invention discloses a graph convolutional network (GCN) software-hardware collaborative acceleration method based on in-memory computing. On the software side, the method converts the high-bit-width floating-point numbers of the graph data into low-bit-width fixed-point numbers through quantization; partitions the original graph into multiple sub-graphs to obtain a clustering result; maps the node features of the sub-graphs onto RRAM crossbar arrays used for feature aggregation; and then deletes the edges connecting different sub-graphs in the clustering result, thereby deploying the graph data onto hardware. On the hardware side, a computation module and a control module are configured: the computation module comprises an aggregation-core array, a vector-combination array, and an intermediate cache, while the control module comprises an instruction queue, a graph-data decoder, and a neighbor cache. Through this collaborative design of software and hardware, acceleration of GCN computation is realized, improving computational parallelism, architectural throughput, and hardware resource utilization.
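Two of the software-side steps in the abstract — quantizing floating-point features to low-bit fixed point, and deleting edges that cross cluster boundaries — can be sketched in a few lines. This is a minimal illustration under assumed parameters (8-bit signed fixed point with 7 fractional bits); the function names and bit widths are not from the patent.

```python
def quantize_fixed_point(x, frac_bits=7, total_bits=8):
    """Quantize a float to a signed fixed-point integer (saturating)."""
    scale = 1 << frac_bits                      # 2^frac_bits
    lo = -(1 << (total_bits - 1))               # e.g. -128 for 8 bits
    hi = (1 << (total_bits - 1)) - 1            # e.g. +127 for 8 bits
    return max(lo, min(hi, round(x * scale)))   # round, then saturate

def drop_cross_cluster_edges(edges, cluster_of):
    """Keep only edges whose two endpoints lie in the same cluster."""
    return [(u, v) for (u, v) in edges if cluster_of[u] == cluster_of[v]]

edges = [(0, 1), (1, 2), (2, 3)]
cluster_of = {0: 0, 1: 0, 2: 1, 3: 1}           # nodes 0,1 vs nodes 2,3
print(quantize_fixed_point(0.5))                 # 64 (= 0.5 * 2^7)
print(drop_cross_cluster_edges(edges, cluster_of))  # [(0, 1), (2, 3)]
```

After edge deletion, each cluster's adjacency becomes block-local, which is what allows the sub-graph features to be mapped independently onto separate crossbar arrays.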

Description

Technical Field [0001] The invention relates to the field of in-memory computing, and in particular to a method and device for software-hardware collaborative acceleration of a graph convolutional network based on in-memory computing. Background Technique [0002] A graph is a commonly used data structure consisting of numerous nodes and the edges connecting them. Structures common in daily life and research, such as transportation networks, social networks, and molecular structures, can be abstractly represented by graphs. In recent years, graph neural network models (GNNs) have demonstrated strong learning capability on graph-based tasks such as recommender systems and document classification. The graph convolutional network (GCN) has become the most commonly used GNN model today due to its small computational cost and excellent accuracy. As shown in Figure 1, each node in a graph task has a feature vector, which contains the inform...
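The per-node feature vectors described above are combined by a standard GCN layer as H' = ReLU(Â H W), where Â is the adjacency matrix with self-loops, symmetrically normalized. The following is a generic sketch of that well-known propagation rule, not the patent's specific hardware mapping; the tiny 2-node graph and identity weights are made-up values.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^{-1/2} (A+I) D^{-1/2} @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)      # aggregate, combine, ReLU

A = np.array([[0., 1.], [1., 0.]])  # 2-node graph with a single edge
H = np.eye(2)                        # one-hot node features
W = np.eye(2)                        # identity weights, for illustration
out = gcn_layer(A, H, W)
print(out)  # every entry is 0.5: each node averages itself and its neighbor
```

The aggregation step `A_norm @ H` is the matrix-vector workload that an RRAM crossbar computes in place, which is why the patent maps node features onto crossbar arrays dedicated to feature aggregation.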

Claims


Application Information

IPC(8): G06N 3/063; G06N 3/04
CPC: G06N 3/063; G06N 3/045
Inventor: 汪玉, 朱昱, 朱振华, 戴国浩, 杨华中
Owner TSINGHUA UNIV