
Heterogeneous neural network calculation accelerator design method based on FPGA

A technology relating to neural networks and design methods, applied in the computer field, which can solve problems such as reduced computational efficiency, and achieve effects such as efficient resource utilization, saved computing time, and low computing-power requirements.

Active Publication Date: 2020-04-10
UNIV OF ELECTRONICS SCI & TECH OF CHINA
Cites: 9 | Cited by: 12

AI Technical Summary

Problems solved by technology

However, because this approach reuses the convolutional neural network computing module, the required parameters must be reloaded from memory before each layer is computed. This parameter loading undoubtedly takes a great deal of time, which greatly reduces the efficiency of the overall network calculation.

Images

[Three patent drawings; per the description, Figure 1 shows the overall structure of the platform and Figure 3 shows the data calculation flowchart.]

Examples

Detailed Description of the Embodiments

[0029] The present invention is described in further detail below in conjunction with the accompanying drawings and a specific embodiment:

[0030] In this embodiment, the overall structure of the deep learning computing platform design method is shown in Figure 1, and it specifically includes the following:

[0031] The CPU accesses the external memory and the FPGA through a bus. The external memory stores the relevant parameters of the neural network as well as the input data and output results. Through the bus, the CPU directs the scheduling control unit located on the FPGA to manage data access and the operation of the convolution calculation units. Under the direction of the scheduling controller, each convolution calculation unit performs parameter loading, convolution calculation, and final result output in order.
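To make the division of labor above concrete, below is a minimal host-side sketch in C. It is not from the patent: the base address, the REG_* register map, and the dispatch_layer/scheduler_busy helpers are hypothetical illustrations of how a CPU could drive the scheduling control unit over a memory-mapped bus, which in turn starts a selected convolution calculation unit.

    /* Hypothetical register map of the FPGA scheduling control unit;
       all addresses and bit layouts are illustrative, not from the patent. */
    #include <stdint.h>

    #define SCHED_BASE      0x40000000u
    #define REG_PARAM_ADDR  (SCHED_BASE + 0x00u) /* external-memory address of layer parameters */
    #define REG_INPUT_ADDR  (SCHED_BASE + 0x04u) /* external-memory address of input data       */
    #define REG_OUTPUT_ADDR (SCHED_BASE + 0x08u) /* external-memory address for results         */
    #define REG_CTRL        (SCHED_BASE + 0x0Cu) /* bit 0 = start, bit 1 = unit select          */
    #define REG_STATUS      (SCHED_BASE + 0x10u) /* bit 0 = busy                                */

    static inline void mmio_write(uint32_t addr, uint32_t v)
    {
        *(volatile uint32_t *)(uintptr_t)addr = v;
    }

    static inline uint32_t mmio_read(uint32_t addr)
    {
        return *(volatile uint32_t *)(uintptr_t)addr;
    }

    /* CPU side: point the scheduler at one layer's data in external memory
       and start the selected convolution calculation unit (0 or 1). */
    void dispatch_layer(uint32_t params, uint32_t input, uint32_t output, int unit)
    {
        mmio_write(REG_PARAM_ADDR,  params);
        mmio_write(REG_INPUT_ADDR,  input);
        mmio_write(REG_OUTPUT_ADDR, output);
        mmio_write(REG_CTRL, 0x1u | ((uint32_t)unit << 1));
    }

    int scheduler_busy(void)
    {
        return (int)(mmio_read(REG_STATUS) & 0x1u);
    }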

[0032] Figure 3 is the data calculation flowchart of the present invention; the implementation details specifically include the following st...

Abstract

The invention belongs to the technical field of computers. The invention provides an FPGA-based heterogeneous neural network calculation accelerator design method suitable for accelerating large-scale deep neural network algorithms. The method comprises the following steps: a CPU reads the relevant parameters of a neural network and dynamically configures an external memory and the convolution calculation units according to the obtained information; the external memory stores the parameters and input data, which are loaded through a bus into the corresponding positions of an input cache; the parameters are loaded into the two convolution calculation units alternately, so that while parameters are being loaded into one convolution calculation unit, the other convolution calculation unit performs calculation, and this alternation continues cyclically until all operations of the entire convolutional neural network are completed; and the final output results are stored in an output cache to wait for the external memory to access them. By using the FPGA to combine the convolutional neural network calculation units, the method increases the computing rate of the platform while saving resources.
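The core idea in the abstract, loading parameters into one convolution calculation unit while the other computes, is a double-buffering (ping-pong) schedule. The runnable C sketch below models only the alternation order; the layer count and the printf calls are illustrative stand-ins for the real parameter loads and convolutions:

    /* Behavioral sketch of the ping-pong scheduling described in the abstract.
       Prints the order in which loads and computations would overlap. */
    #include <stdio.h>

    enum { NUM_LAYERS = 6 }; /* illustrative layer count */

    int main(void)
    {
        int active = 0; /* unit currently computing; the other one is loading */

        /* Prime the pipeline: layer 0's parameters go into unit 0 first. */
        printf("load    layer 0 -> unit 0\n");

        for (int layer = 0; layer < NUM_LAYERS; ++layer) {
            int idle = 1 - active;

            /* While 'active' computes layer N, the next layer's parameters
               stream into the idle unit, hiding the load latency. */
            if (layer + 1 < NUM_LAYERS)
                printf("load    layer %d -> unit %d (overlapped)\n", layer + 1, idle);

            printf("compute layer %d on unit %d\n", layer, active);

            active = idle; /* swap roles: cyclic alternation until done */
        }

        printf("store final results in the output cache\n");
        return 0;
    }

With full overlap, the per-layer cost approaches the larger of the load time and the compute time instead of their sum, which is the source of the time saving the abstract claims.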

Description

Technical field

[0001] The invention belongs to the technical field of computers, and in particular relates to a design method for an FPGA-based heterogeneous neural network computing accelerator.

Background

[0002] Deep learning is an important area of artificial intelligence, mainly concerned with the algorithms, theory, and applications of neural networks. Since Hinton et al. proposed the concept of deep learning in 2006, it has achieved extraordinary results in natural language processing, image processing, speech processing, and many other areas, and has received great attention. Although deep learning has powerful data analysis and prediction capabilities, it still faces the problem of a very large amount of computation, so the construction of an efficient deep learning platform is becoming more and more important.

[0003] An FPGA (Field Programmable Gate Array) is the product of further development on the basis of PAL, GAL, CPLD and ...

Claims

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/063
CPC: G06N3/063; Y02D10/00
Inventors: 李培睿, 阮爱武, 杜鹏
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA