
BP neural network parallelization method for multi-core computing environment

A BP neural network and computing-environment technology, applied in the field of BP neural network parallelization. It addresses problems such as excessive synchronization and limited efficiency gains, and achieves the effects of reducing cache replacement, improving parallelism, and fully exploiting hardware performance.

Inactive Publication Date: 2017-06-30
SOUTH CHINA UNIV OF TECH +1
Cites: 0 | Cited by: 23

AI Technical Summary

Problems solved by technology

Horizontal structure division balances the load easily, but it requires too many synchronizations, so the efficiency improvement is limited.

Method used




Embodiment Construction

[0061] To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

[0062] Assume the number of available computing units is P and the size of the processor's second-level (L2) shared cache is C bytes. The input layer is layer 0 with input vector dimension N; the BP neural network has at most H hidden layers, of which the i-th hidden layer has T_i neurons; the output layer is layer H+1 with T_{H+1} neurons. The sample-set size is Q and the maximum number of training iterations is M (P, N, H, i, T_i, T_{H+1}, Q, and M are all positive constants greater than 1).
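The parameters above suggest how the batch count K could be tied to the L2 cache size C. The patent's exact rule is not reproduced in this excerpt; the sketch below assumes the simple criterion that one batch's intermediate activations (one value per neuron per layer, per sample) should fit in the cache. The function name and dtype size are hypothetical.

```python
import math

def batch_count(Q, layer_sizes, cache_bytes, dtype_bytes=8):
    """Estimate the number of batches K so that one batch's
    intermediate results (activations of every layer) fit in the
    shared L2 cache of size cache_bytes.

    Hypothetical formula: the patent's exact rule is not given here.
    layer_sizes: [N, T_1, ..., T_H, T_{H+1}]
    """
    # Bytes of activation storage one sample needs across all layers.
    per_sample = sum(layer_sizes) * dtype_bytes
    # How many samples fit in the cache at once (at least 1).
    samples_per_batch = max(1, cache_bytes // per_sample)
    # Number of batches needed to cover all Q samples.
    return math.ceil(Q / samples_per_batch)
```

For example, with Q = 10000 samples, layers [64, 128, 128, 10], and a 2 MB L2 cache, this criterion yields K = 13 batches of roughly 794 samples each.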

[0063] For a hardware platform with P computing units, BP neural network training is divided into P initialization tasks and P training tasks. Each initialization task includes:

[0064] Subtask 101: sample initialization processing subtask, encode...



Abstract

The invention provides a BP neural network parallelization method oriented to a multi-core computing environment. It comprises: a parallel computing task division and mapping method; a cache-setting method for storing the intermediate computation results of each layer; and a parallel training method for the BP neural network. For a hardware platform containing P computing units, the task-division method merges the hidden-layer and output-layer computing tasks into a single larger-granularity task to improve parallelism. With the cache-setting method, within one training pass a variable accessed by one subtask can be reused by the next subtask without causing a cache miss. In the training method, the samples are divided into K batches that enter network training in turn, where K is chosen according to the size of the computing platform's second-level cache. Combined with the cache settings, this maximizes hardware performance and suits BP neural network applications on multi-core computing platforms.
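The abstract's task-division idea, merging hidden-layer and output-layer computation into one coarse-grained task, can be sketched as a single forward-pass function that a worker runs end to end on its batch, with no hand-off between layers. The sigmoid activation and the weights layout below are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def forward_task(X, weights):
    """One coarse-grained task: propagate a batch X through every
    hidden layer AND the output layer in a single call, so a worker
    never hands a batch off between layers.

    X: (batch, N) input batch; weights: list of (W, b) per layer.
    Sigmoid activation assumed throughout.
    """
    a = X
    for W, b in weights:
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))  # affine step + sigmoid
    return a
```

In a multi-core setting, each of the K batches would be mapped to one such task (e.g. via a process or thread pool of size P), keeping one batch's activations resident in a core's cache for the whole pass.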

Description

Technical field

[0001] The invention relates to the fields of BP neural networks and high-performance computing, and in particular to a BP neural network parallelization method oriented to a multi-core computing environment.

Background technique

[0002] An Artificial Neural Network (ANN) is an abstraction of the human brain's neuron network; it analyzes and processes information by abstracting and simulating the characteristics of natural neural networks. The BP neural network algorithm is currently one of the most widely used neural network models and has been successfully applied to research in fields such as information science, biology, and medicine. When the algorithm faces a huge data set or a complex network structure, training takes a long time and proceeds slowly, which lowers the overall efficiency of network computing and hinders the application and development of BP neural networks.

[0003] After the hardware enters ...

Claims


Application Information

IPC(8): G06N3/08, G06F9/50, G06F12/0893, G06F12/0897
CPC: G06N3/084, G06F9/5027, G06F12/0893, G06F12/0897
Inventors: 汤德佑, 梁珩琳, 曾庆淼, 张亚卓, 汤达祺, 邹春刚
Owner: SOUTH CHINA UNIV OF TECH