Deep neural network acceleration platform based on FPGA

An FPGA-based deep-neural-network acceleration technology, applied in the field of acceleration-platform design, which addresses problems such as long development cycles and achieves good performance.

Active Publication Date: 2018-06-29
苏州中科瀚海高工科技有限公司
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0008] Based on the analysis of the above acceleration platforms, the FPGA is an attractive middle ground between the efficiency of an ASIC and the programmability of a general-purpose processor. However, FPGA development requires hardware-design experience and a long development cycle, and for this reason FPGAs shut out many software programmers.



Examples


Embodiment

[0051] The deep neural network acceleration platform in the embodiment of the present invention includes a general-purpose processor, a field programmable gate array, and a storage module, wherein the data path between the FPGA and the general-purpose processor can use the PCI-E bus protocol, the AXI bus protocol, and the like. The data path in the drawings of the embodiments of the present invention is illustrated by using the AXI bus protocol as an example, but the present invention is not limited thereto.
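The data path described in paragraph [0051] is bus-agnostic: the CPU–FPGA link may use PCI-E, AXI, or a similar protocol, so the rest of the platform only needs an abstract read/write interface to shared memory. The following is a minimal illustrative sketch; the class and method names are assumptions, not part of the patent.

```python
# Hypothetical abstraction over the CPU<->FPGA data path: the platform can
# back it with PCI-E or AXI, and the rest of the system only sees buffer
# reads and writes against a shared address space.
from abc import ABC, abstractmethod


class DataPath(ABC):
    @abstractmethod
    def write(self, addr: int, data: bytes) -> None: ...

    @abstractmethod
    def read(self, addr: int, length: int) -> bytes: ...


class AxiDataPath(DataPath):
    """Toy in-memory model standing in for an AXI-mapped DRAM window."""

    def __init__(self, size: int = 1 << 16):
        self.mem = bytearray(size)

    def write(self, addr: int, data: bytes) -> None:
        self.mem[addr:addr + len(data)] = data

    def read(self, addr: int, length: int) -> bytes:
        return bytes(self.mem[addr:addr + length])


bus = AxiDataPath()
bus.write(0x100, b"weights")
print(bus.read(0x100, 7))  # prints b'weights'
```

A PCI-E-backed implementation would expose the same two methods, which is what lets the embodiment say the invention "is not limited" to AXI.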

[0052] Figure 1 is a design flowchart of the acceleration-system platform of an embodiment of the present invention; the steps are as follows:

[0053] The general-purpose processor parses the neural-network configuration information and weight data and writes them into the DRAM;

[0054] The FPGA reads the configuration information from the DRAM and generates the FPGA accelerator;

[0055] The general processor re...
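The host-side flow of paragraphs [0053]–[0055] can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the `Dram` class is a stand-in for the shared memory, and the accelerator is modeled as a simple callable rather than real FPGA logic.

```python
# Hypothetical sketch of the general-purpose processor's orchestration:
# stage config/weights in DRAM, let the FPGA build its accelerator from the
# config, then stream an image through and read back the classification.

class Dram:
    """Stand-in for the DRAM visible to both the CPU and the FPGA."""
    def __init__(self):
        self.regions = {}

    def write(self, key, data):
        self.regions[key] = data

    def read(self, key):
        return self.regions[key]


def run_inference(dram, config, weights, image):
    # Step [0053]: CPU parses and stages configuration and weights.
    dram.write("config", config)
    dram.write("weights", weights)

    # Step [0054]: FPGA reads the config and instantiates the accelerator
    # (modeled here as a callable that returns the argmax of the scores).
    accel_cfg = dram.read("config")
    accelerator = lambda img: {
        "class": max(range(len(img)), key=img.__getitem__),
        "layers": accel_cfg["layers"],
    }

    # Step [0055]: CPU writes the image; the FPGA computes and writes the
    # result back; the CPU reads the classification result from DRAM.
    dram.write("image", image)
    dram.write("result", accelerator(dram.read("image")))
    return dram.read("result")


result = run_inference(Dram(),
                       config={"layers": 3},
                       weights=[0.1, 0.2],
                       image=[0.05, 0.90, 0.05])
print(result["class"])  # prints 1, the index of the largest score
```

The point of the split is that every CPU–FPGA interaction goes through DRAM, so the software side never needs to know how the accelerator is wired.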


Abstract

The present invention discloses an FPGA-based deep neural network acceleration platform. The platform comprises a general-purpose processor, an FPGA, and a DRAM. The general-purpose processor parses neural-network configuration information and weight data and writes them into the DRAM; the FPGA reads the configuration information from the DRAM to generate an FPGA accelerator; the general-purpose processor reads in image information and writes it into the DRAM; the FPGA accelerator reads the image data from the DRAM, performs the computation, and writes the result into the DRAM; finally, the general-purpose processor reads the classification result from the DRAM. The accelerator deploys all layers on the FPGA chip at the same time so that the layers operate in a pipelined fashion, allowing programmers with no hardware knowledge to use existing FPGA resources and easily obtain good performance.

Description

Technical field

[0001] The invention relates to an algorithm hardware acceleration platform, in particular to a design method for an FPGA-based deep neural network acceleration platform with good versatility and high flexibility.

Background technique

[0002] The neural network belongs to the connectionist school of artificial intelligence and is a mathematical model that processes information with a structure resembling the brain's synaptic connections. In the 1950s, the first generation of neural networks, the perceptron, was born; it could realize linear classification, associative memory, and so on. In the 1980s, the multi-layer perceptron and its training algorithm, back propagation (BP), were widely studied and applied because they could solve linearly inseparable problems. However, the relatively low hardware computing power of the time and the tendency of the training algorithm to fall into local minima, among other problems, ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/063
CPC: G06N3/063
Inventors: 李曦, 周学海, 王超, 陈香兰
Owner 苏州中科瀚海高工科技有限公司