
A parallelization-based brain-like simulation compilation acceleration method

A technology involving neurons and neuron groups, applied in the field of neural network simulation, which solves problems such as long compilation time and poor user experience, and achieves high efficiency, fast speed, and a good user experience

Status: Pending
Publication Date: 2021-04-13
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

For a very large-scale neural network, serial compilation on the CPU consumes a lot of time, which degrades the experience of using the simulation framework.
Chinese patent CN110908667A discloses a method, device and electronic equipment for joint compilation of neural networks, in which compilation is performed serially; this takes a long time and results in poor user experience.



Examples


Embodiment 1

[0034] A method for accelerating brain-inspired simulation compilation based on parallelization, comprising the following steps:

[0035] S1. When constructing a neural network, create several groups, each containing millions of neurons;

[0036] S2. Construct neuron arrays in parallel according to the neuron groups;

[0037] S3. Construct synapse arrays and the mapping relationship between neurons and synapse arrays in parallel according to the connections between groups.
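A minimal sketch of how steps S1 to S3 could be parallelized on a multi-core CPU, using Python, NumPy and a process pool; the group sizes, parameter names (v_rest, fanout, weight) and the fixed random fan-out connection rule are illustrative assumptions, not details taken from the patent.

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def build_neuron_array(group):
    """S2: build the state arrays for one neuron group (all neurons share the same model)."""
    size = group["size"]
    return {
        "group": group["name"],
        "v": np.full(size, group["v_rest"], dtype=np.float32),   # membrane potentials
        "refractory": np.zeros(size, dtype=np.int32),            # refractory counters
    }

def build_synapse_array(conn):
    """S3: build the synapse arrays for one inter-group connection (random fixed fan-out)."""
    rng = np.random.default_rng(conn["seed"])
    pre_size, post_size, fanout = conn["pre_size"], conn["post_size"], conn["fanout"]
    pre_ids = np.repeat(np.arange(pre_size), fanout) + conn["pre_offset"]
    post_ids = rng.integers(0, post_size, size=pre_size * fanout) + conn["post_offset"]
    weights = np.full(pre_ids.size, conn["weight"], dtype=np.float32)
    return {"pre": pre_ids, "post": post_ids, "w": weights}

if __name__ == "__main__":
    # S1: the network is described as neuron groups plus inter-group connections.
    groups = [{"name": "exc", "size": 100_000, "v_rest": -65.0},
              {"name": "inh", "size": 25_000, "v_rest": -65.0}]
    offsets, total = {}, 0
    for g in groups:
        offsets[g["name"]] = total
        total += g["size"]
    conns = [{"pre_size": groups[0]["size"], "post_size": groups[1]["size"],
              "pre_offset": offsets["exc"], "post_offset": offsets["inh"],
              "fanout": 10, "weight": 0.5, "seed": 0}]

    with ProcessPoolExecutor() as pool:
        neuron_arrays = list(pool.map(build_neuron_array, groups))   # S2: one task per group
        synapse_arrays = list(pool.map(build_synapse_array, conns))  # S3: one task per connection

Because groups are independent of each other, and inter-group connections are independent of each other, each can be built as a separate task without synchronization, which is what makes the parallel speed-up possible.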

[0038] At present, many mainstream brain-inspired simulation frameworks allow users to input neuron data in the form of neuron groups and to establish synaptic connections between neuron groups in a user-specified way. A so-called group refers to a set of neuron nodes with the same model and the same attributes. When constructing a neural network, several groups are created, and each group contains millions of neurons. In step S2, because all neuro...
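The visible text does not specify how the mapping from neurons to the synapse arrays in step S3 is represented; one common realization is a CSR-style index over the flat synapse arrays, sketched below under that assumption.

import numpy as np

def build_pre_to_synapse_index(pre_ids, num_neurons):
    """Sort synapses by presynaptic neuron and return (order, row_ptr) so that the
    outgoing synapses of neuron i occupy order[row_ptr[i]:row_ptr[i+1]]."""
    order = np.argsort(pre_ids, kind="stable")
    counts = np.bincount(pre_ids, minlength=num_neurons)
    row_ptr = np.concatenate(([0], np.cumsum(counts)))
    return order, row_ptr

# Example: 3 neurons, 5 synapses
pre_ids = np.array([2, 0, 2, 1, 0])
order, row_ptr = build_pre_to_synapse_index(pre_ids, num_neurons=3)
# Outgoing synapses of neuron 0: order[row_ptr[0]:row_ptr[1]] -> indices [1, 4]

With such an index, a simulation kernel can look up all synapses of a spiking neuron as one contiguous slice, which suits parallel traversal on a GPU.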


Abstract

The invention relates to the technical field of neural network simulation, and in particular to a parallelization-based brain-like simulation compilation acceleration method, which comprises the following steps: S1, when a neural network is constructed, creating a plurality of groups, each group comprising millions of neurons; S2, constructing neuron arrays in parallel according to the neuron groups; and S3, constructing synapse arrays and the mapping relationship from neurons to the synapse arrays in parallel according to the connections between the groups. With the parallelization-based brain-like simulation compilation acceleration method, the speed of the simulation framework is increased through a parallel algorithm, and the waiting time of the user is greatly shortened.

Description

Technical field
[0001] The invention relates to the technical field of neural network simulation, and in particular to a parallelization-based brain-like simulation compilation acceleration method.
Background art
[0002] Before a GPU can be used to simulate a spiking neural network, the neural network topology data input by the user must be compiled into a data structure suitable for parallel simulation on the GPU. For a very large-scale neural network, serial compilation on the CPU consumes a lot of time, which degrades the experience of using the simulation framework. Chinese patent CN110908667A discloses a method, device and electronic equipment for joint compilation of neural networks, in which compilation is performed serially; this takes a long time and results in poor user experience.
Contents of the invention
[0003] In order to overcome the defects in the prior art, the present invention provides an acceleration method based on para...
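As a hedged illustration of the compilation target the background describes (a data structure suitable for parallel simulation on the GPU), the sketch below flattens per-group neuron state into contiguous arrays; the field names and the optional CuPy transfer are assumptions for the sketch, not details from the patent.

import numpy as np

def flatten_groups(neuron_arrays):
    """Concatenate per-group neuron state into contiguous arrays plus per-group offsets,
    so a GPU kernel can address every neuron through one flat array."""
    sizes = [a["v"].size for a in neuron_arrays]
    group_offsets = np.concatenate(([0], np.cumsum(sizes))).astype(np.int64)
    return {
        "v": np.concatenate([a["v"] for a in neuron_arrays]),
        "refractory": np.concatenate([a["refractory"] for a in neuron_arrays]),
        "group_offsets": group_offsets,
    }

compiled = flatten_groups([
    {"v": np.full(4, -65.0, dtype=np.float32), "refractory": np.zeros(4, dtype=np.int32)},
    {"v": np.full(2, -60.0, dtype=np.float32), "refractory": np.zeros(2, dtype=np.int32)},
])
# compiled["group_offsets"] -> array([0, 4, 6])
# The flat arrays can then be moved to the GPU in a single transfer, for example:
#   import cupy as cp                  # assuming CuPy is available
#   v_gpu = cp.asarray(compiled["v"])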


Application Information

IPC(8): G06N3/10; G06F8/41
CPC: G06N3/10; G06F8/41; Y02D10/00; Y02A90/10
Inventor: 黄凯, 王弘远, 陈刚
Owner: SUN YAT SEN UNIV