
Graph neural network application-oriented task scheduling execution system and method

A neural network and task scheduling technology, applied in the field of graph neural network applications, that addresses problems such as low utilization of computing resources and the inability of existing hardware to run graph neural networks efficiently, achieving efficient and fast execution and improved utilization of computing resources.

Active Publication Date: 2020-09-22
INST OF COMPUTING TECH CHINESE ACAD OF SCI
Cites: 7 · Cited by: 3

AI Technical Summary

Problems solved by technology

The hybrid computation pattern of the aggregation and combination stages prevents current general-purpose processors, as well as graph-computing or neural-network-specific accelerators, from running graph neural networks efficiently.
In addition, simply building a dedicated acceleration structure for each of the two stages leads to problems such as low utilization of computing resources.


Examples


Example 1

[0053] Example 1. Encodings for the different types of graph operators in the PE and their corresponding binary operation trees

[0054] According to the graph operator's encoding, the decoder in the PE applies the input configuration and execution configuration to the graph operator's execution frame, specifically setting the frame as an 8-input binary operation tree for accumulation, maximum, minimum, or multiply-add. Graph operators are encoded in 2 bits: 00, 01, 10, and 11 are interpreted as accumulation, maximum, minimum, and multiply-add operations, respectively.

[0055] Codes 00 and 11 correspond to the 8-input accumulation binary operation tree and the multiply-add binary operation tree, respectively, as shown in Figure 3.
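The decoding described in paragraphs [0054]-[0055] can be sketched as follows. This is an illustrative software model, not the patent's hardware: the function name, the `weights` parameter for the multiply-add case, and the pairwise-reduction loop are assumptions; only the 2-bit code table and the fixed 8-input frame come from the text.

```python
OP_CODES = {
    0b00: "accumulate",    # 8-input accumulation tree
    0b01: "max",           # maximum comparison tree
    0b10: "min",           # minimum comparison tree
    0b11: "multiply-add",  # element-wise products, then accumulation tree
}

def eval_operation_tree(code, operands, weights=None):
    """Model an 8-input binary operation tree selected by a 2-bit code."""
    assert len(operands) == 8, "execution frame is fixed at 8 inputs"
    op = OP_CODES[code]
    if op == "multiply-add":
        # Multiply each input by its weight, then reduce with the sum tree.
        operands = [x * w for x, w in zip(operands, weights)]
        op = "accumulate"
    combine = {"accumulate": lambda a, b: a + b, "max": max, "min": min}[op]
    # Pairwise reduction mimics the binary tree levels: 8 -> 4 -> 2 -> 1.
    while len(operands) > 1:
        operands = [combine(operands[i], operands[i + 1])
                    for i in range(0, len(operands), 2)]
    return operands[0]

print(eval_operation_tree(0b00, [1, 2, 3, 4, 5, 6, 7, 8]))  # 36
print(eval_operation_tree(0b01, [1, 9, 3, 4, 5, 6, 7, 8]))  # 9
```

The pairwise loop makes the log2(8) = 3 tree levels explicit, matching the structure of the operation trees in Figure 3.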

Example 2

[0056] Example 2. Mapping process for a 32-input accumulation operation

[0057] Figure 4 shows the mapping diagram for a 32-input accumulation computation graph. In the initialization phase, the scheduler partitions the accumulation computation graph according to the number of input operands of the original computation graph (32), mapping it into four graph operators with 8 valid inputs each and one additional graph operator with 4 valid inputs. Each graph operator is assigned a dedicated label generated by the label generator TgGen, which records its node ID (VID), round ID (RID), graph operator ID within the current round (GID), the number of outputs of the current round (ONum), and the remaining repetitions (RRT).
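The round-by-round partitioning in Example 2 can be sketched as a small scheduling function. This is a minimal model under the stated assumptions (8-input frames, each operator emitting one intermediate result per round); the function name and return shape are illustrative, not from the patent.

```python
FRAME_INPUTS = 8  # each graph operator execution frame has 8 input slots

def schedule(num_inputs):
    """Split an n-input reduction into rounds of graph operators,
    each with at most FRAME_INPUTS valid inputs."""
    rounds, n = [], num_inputs
    while n > 1:
        full, rem = divmod(n, FRAME_INPUTS)
        # One operator per full group of 8, plus one partial operator if needed.
        rounds.append([FRAME_INPUTS] * full + ([rem] if rem else []))
        n = full + (1 if rem else 0)  # each operator emits one result
    return rounds

# 32 inputs -> four 8-input operators, whose 4 results feed
# one additional operator with 4 valid inputs, as in Figure 4.
print(schedule(32))  # [[8, 8, 8, 8], [4]]
```

In terms of the labels above, RID is the round index in the returned list, GID is the operator's position within its round, ONum is the round's operator count, and RRT would count down from the number of remaining rounds.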

Example 3

[0058] Example 3. Task scheduling and execution process

[0059] In this device, the scheduler performs task scheduling by scheduling the execution of graph operators, and the processing unit (PE) executes them. The to-be-transmitted graph operator cache module stores pending graph operator processing requests, graph operator labels, and operands. The emission unit pads the inputs of each graph operator: if a graph operator has fewer than 8 inputs, it is padded up to 8, using positive infinity for minimum comparison operations, negative infinity for maximum comparison operations, and 0 for multiply-add operations. The emission unit then emits the graph operator processing request to the PE. Figure 5 shows the task scheduling and execution process of the present invention; the specific steps are described as follows:
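The padding rule above fills unused input slots with each operation's identity element, so the padded values cannot change the result. A minimal sketch, with hypothetical function and table names:

```python
import math

FRAME_INPUTS = 8

# Identity element per operation: the pad value must be neutral
# under the operator's reduction.
PAD_VALUE = {
    "min":          math.inf,   # +inf never wins a minimum comparison
    "max":          -math.inf,  # -inf never wins a maximum comparison
    "accumulate":   0,          # adding 0 changes nothing
    "multiply-add": 0,          # a zero term contributes nothing to the sum
}

def pad_inputs(op, operands):
    """Pad a graph operator's operand list up to the 8-slot frame."""
    fill = PAD_VALUE[op]
    return operands + [fill] * (FRAME_INPUTS - len(operands))

print(pad_inputs("min", [3, 1, 4]))
# [3, 1, 4, inf, inf, inf, inf, inf]
```

This is why the partial 4-input operator from Example 2 can run on the same fixed 8-input execution frame as the full operators.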

[0060] Step 501: The graph op...



Abstract

The invention provides a graph neural network application-oriented task scheduling execution system and method. The method comprises the steps that: a to-be-transmitted graph operator cache module reads a to-be-processed graph operator processing request and obtains the needed input data from a cache according to the graph operator label of the request; the transmitting unit transmits the to-be-processed graph operator processing request and the input data to a unified structure processing unit based on a static data flow; the unified structure processing unit maps the input data to the inputs of the corresponding binary operation tree according to the graph operator code of the graph operator label and the graph operator execution frame, and completes the current round of operation to obtain an intermediate result; the label generator generates a new graph operator label according to the label information of the previous round of operation; the unified structure processing unit returns the intermediate result and the new graph operator label to the to-be-transmitted graph operator cache module; and execution proceeds cyclically until the remaining-repetitions value in the graph operator label is 1, at which point the current intermediate result is written back to the cache.

Description

Technical field

[0001] The invention relates to the field of graph neural network applications, and in particular to a task scheduling execution system and method for graph neural network applications.

Background technique

[0002] Convolutional neural networks are often used to solve problems in computer vision, natural language processing, and speech analysis. However, they are usually only applicable to data spaces with a Euclidean or grid structure, which limits their range of application. In recent years, research on non-Euclidean graph-structured data has been on the rise, as data in graph structures can express more complex relationships between elements at a larger scale. Graph convolutional neural networks (GCNs) perform graph convolution on graph-structured data and have a more powerful ability to express information, so they have received great attention in academia and industry. Graph convolutional neural networks are currently widely used in n...

Claims


Application Information

IPC(8): G06F 9/48; G06N 3/04
CPC: G06F 9/4881; G06N 3/045; Y02D 10/00
Inventor: 严明玉, 李涵, 叶笑春, 曹华伟, 范东睿
Owner INST OF COMPUTING TECH CHINESE ACAD OF SCI