Hardware accelerator, system and method for accelerating graph neural network attribute access

A hardware accelerator and neural network technology, applied to neural learning methods, biological neural network models, neural architectures, etc., addressing the problem that the difficulty of scaling to large graphs has hindered the widespread use of graph neural networks; the effects include improved memory access efficiency, improved overall performance, and avoidance of external data fetches.

Pending Publication Date: 2022-02-01
平头哥上海半导体技术有限公司

AI Technical Summary

Problems solved by technology

One of the challenges that has so far hindered the widespread adoption of graph neural networks in industrial applications is the difficulty of scaling them to large graphs.




Embodiment Construction

[0031] This disclosure is intended to enable those skilled in the art to make and use the embodiments, and is presented in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the illustrated embodiments, but is to be accorded the widest scope consistent with the principles and characteristics of the present disclosure.

[0032] Data can be structured or unstructured. With structured data, information can be arranged according to a preset data model or schema. With unstructured data, there is no need to use a preset data model or a predefined way to arrange the information. For example, text files (e.g., emails, reports, etc.) may include information (e.g., in...



Abstract

The present disclosure provides a hardware accelerator, a system, and a method for accelerating graph neural network attribute access. The hardware accelerator includes a graph neural network attribute processor and a first memory. The graph neural network attribute processor is configured to: receive a first graph node identifier; determine a target memory address within the first memory based on the first graph node identifier; determine whether attribute data corresponding to the received first graph node identifier is cached at the target memory address of the first memory; and, in response to determining that the attribute data is not cached in the first memory, acquire the attribute data from a second memory and write the acquired attribute data to the target memory address of the first memory. The present disclosure improves the memory access efficiency of obtaining graph node attribute data and thus improves the overall performance of graph neural network computation.
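The cache-check-and-fill flow described in the abstract can be sketched in software roughly as follows. This is an illustrative model only, not the patented hardware design: the direct-mapped organization, the names (AttributeCache, fetch_from_second_memory), and the capacity and attribute-width constants are assumptions introduced for the example.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative model of the flow in the abstract: the "first memory" is
// modeled as a direct-mapped cache of attribute vectors, the "second memory"
// as a larger, slower backing store. All names and sizes are assumptions.
struct AttributeCache {
    static constexpr std::size_t kNumSlots = 1024;  // assumed cache capacity (slots)
    static constexpr std::size_t kAttrDim  = 128;   // assumed attribute vector length

    struct Slot {
        bool valid = false;
        std::uint64_t node_id = 0;        // tag: which node currently occupies the slot
        std::vector<float> attr;          // cached attribute data
    };

    std::vector<Slot> slots = std::vector<Slot>(kNumSlots);

    // Stand-in for reading attribute data from the second memory.
    std::vector<float> fetch_from_second_memory(std::uint64_t node_id) {
        return std::vector<float>(kAttrDim, static_cast<float>(node_id));  // placeholder data
    }

    // 1) receive a node identifier, 2) map it to a target address,
    // 3) check whether that node's attributes are cached there,
    // 4) on a miss, fetch from the second memory and fill the target slot.
    const std::vector<float>& get_attributes(std::uint64_t node_id) {
        std::size_t target = static_cast<std::size_t>(node_id % kNumSlots);
        Slot& slot = slots[target];
        if (!slot.valid || slot.node_id != node_id) {   // not cached in the first memory
            slot.attr    = fetch_from_second_memory(node_id);
            slot.node_id = node_id;
            slot.valid   = true;
        }
        return slot.attr;                                // hit, or freshly filled
    }
};
```

The direct-mapped, single-level organization is only the simplest choice consistent with the abstract; the listed CPC classes (e.g., G06F12/0811, G06F12/121, G06F12/128) suggest the disclosure also covers multi-level cache hierarchies and replacement policies, which the sketch does not model.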

Description

Technical Field

[0001] The present disclosure relates generally to accelerating graph neural network (GNN) computation, and in particular to a hardware accelerator, computer system, and method for accelerating graph neural network computation using a multi-level attribute cache mechanism.

Background Art

[0002] Although traditional deep learning models are good at pattern recognition and data mining by capturing the hidden patterns of Euclidean data (such as images, text, and video), graph neural networks (GNNs) have proven able to extend the capabilities of machine learning to the non-Euclidean domain of graph data representations with complex relationships and interdependencies between objects. Studies have shown that graph neural networks can surpass the current state of the art in applications ranging from molecular inference to community detection. One of the challenges that has so far hindered the widespread adoption of graph neural network...
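To make concrete why attribute access dominates memory traffic when GNNs are scaled to large graphs, the sketch below shows a generic neighbor-aggregation step: each node gathers the attribute vectors of its neighbors, and because neighbor identifiers are effectively arbitrary, the reads into the attribute table are irregular. This is a generic GNN illustration, not taken from the disclosure; aggregate_neighbors and its parameters are hypothetical names.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Generic sketch of per-node aggregation in a GNN layer: sum the attribute
// vectors of a node's neighbors. The neighbor ids scatter across the whole
// attribute table, which is the irregular access pattern a dedicated
// attribute cache aims to serve. All names here are illustrative.
std::vector<float> aggregate_neighbors(
    std::uint64_t node,
    const std::vector<std::vector<std::uint64_t>>& adjacency,  // neighbor lists per node
    const std::vector<std::vector<float>>& attributes)         // attribute table ("second memory")
{
    std::vector<float> acc(attributes[node].size(), 0.0f);
    for (std::uint64_t nbr : adjacency[node]) {
        const std::vector<float>& a = attributes[nbr];          // irregular, node-dependent read
        for (std::size_t i = 0; i < acc.size(); ++i) {
            acc[i] += a[i];
        }
    }
    return acc;
}
```

On a large graph this attribute table far exceeds on-chip capacity, which is the setting in which caching recently used attribute vectors in a small first memory pays off.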


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/063; G06N3/04; G06N3/08
CPC: G06N3/063; G06N3/08; G06N3/045; G06F2212/454; G06F12/0897; G06F12/0895; G06F12/0811; G06F2212/1016; G06F12/128; G06F12/121; G06F2212/604
Inventor: 关天婵, 刘恒, 李双辰, 郑宏忠
Owner: 平头哥上海半导体技术有限公司