
Convolutional neural network acceleration processing system and method based on FPGA, and terminal

A convolutional neural network acceleration processing technology, applied in the field of FPGA-based acceleration systems. It addresses problems such as large differences in parallelism across the dimensions of a network layer, mismatch between the computing characteristics of the network and the deployed on-chip architecture, and differing memory access characteristics of different network layers, so as to achieve the effect of improved acceleration efficiency.

Active Publication Date: 2020-08-25
SHANGHAI ADVANCED RES INST CHINESE ACADEMY OF SCI

AI Technical Summary

Problems solved by technology

[0006] In view of the shortcomings of the prior art described above, the purpose of this application is to provide an FPGA-based convolutional neural network acceleration processing system, method, and terminal, to address the following problems in the prior art: the inherent computing characteristics of convolutional neural networks are mismatched with the on-chip network architectures on which they are deployed, leaving considerable room for improvement in hardware acceleration efficiency; the degree of parallelism differs greatly across the dimensions of a network layer; and different network layers have different memory access characteristics.



Examples


Embodiment 1

[0068] Embodiment 1: an FPGA-based convolutional neural network acceleration processing system; please refer to Figure 7.

[0069] Based on a pipeline architecture, the system includes:

[0070] an off-chip DDR memory, an off-chip memory interface, a direct memory access (DMA) controller, a convolution calculation core engine, an input-feature on-chip cache unit, a weight on-chip cache unit, an intermediate-value on-chip cache unit, and a pipeline controller unit. In Figure 7, the solid arrows are data paths and the dotted arrows are control paths.

[0071] The off-chip DDR memory sends off-chip input data to the off-chip memory interface to realize data transmission with the chip; the input-feature on-chip cache unit is used to read, from the off-chip input data, the input feature map data of the convolutional neural network; the weight on-chip cache unit, connected to the input-feature on-chip cache unit, is used to read the weight data corresponding to the...
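The division of labor described above (input-feature buffer, weight buffer, convolution core, intermediate-value buffer) can be sketched in software as a tiled direct convolution, where a group of output channels is computed "in parallel" as the engine's unroll factor. This is a minimal illustrative model only: the function and parameter names (`conv_tile`, `P`) are assumptions, not the patent's actual interfaces, and the vectorized inner loop merely stands in for the hardware's parallel multiply-accumulate array.

```python
import numpy as np

def conv_tile(ifm, weights, P=4):
    """Direct convolution over one tile.

    ifm:     (C_in, H, W) input-feature tile from the on-chip buffer
    weights: (C_out, C_in, K, K) kernel tile from the weight buffer
    P:       hypothetical unroll factor: output channels processed per group,
             standing in for the convolution core's parallelism
    Returns the (C_out, H-K+1, W-K+1) partial sums destined for the
    intermediate-value on-chip buffer.
    """
    C_out, C_in, K, _ = weights.shape
    H, W = ifm.shape[1:]
    Ho, Wo = H - K + 1, W - K + 1
    out = np.zeros((C_out, Ho, Wo))
    for oc0 in range(0, C_out, P):              # groups of P output channels
        for oc in range(oc0, min(oc0 + P, C_out)):
            for ic in range(C_in):              # accumulate over input channels
                for kh in range(K):
                    for kw in range(K):
                        # One shifted window of the feature map times one
                        # scalar weight; on hardware these MACs run in parallel.
                        out[oc] += weights[oc, ic, kh, kw] * \
                                   ifm[ic, kh:kh + Ho, kw:kw + Wo]
    return out
```

On the FPGA, the `P` output channels in a group would be computed by replicated multiply-accumulate units in the same cycle rather than sequentially as here.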



Abstract

The invention provides an FPGA-based convolutional neural network acceleration processing system, method, and terminal. It addresses the prior-art problems that the inherent computing characteristics of a convolutional neural network are mismatched with the network-on-chip architecture, hardware acceleration efficiency is low, the parallelism of a network layer differs greatly across dimensions, and the memory access characteristics of different network layers differ. Through a pipeline architecture, a customized multistage memory access strategy, and convolutional parallel optimization, the acceleration efficiency, data throughput, and computing energy efficiency of the convolutional neural network on the FPGA are improved.
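The abstract names a "customized multistage memory access strategy" without detailing it here; a common realization on FPGA pipelines is ping-pong (double) buffering, where the DMA fills one on-chip buffer while the compute engine consumes the other, hiding DDR latency. The sketch below is an assumption-laden software analogy: `load`, `compute`, and `store` are hypothetical stand-ins for the DMA controller, convolution core, and write-back path, and in software the stages run sequentially whereas the hardware overlaps them.

```python
def pipelined_layer(tiles, load, compute, store):
    """Two-stage ping-pong schedule over a list of tiles.

    While the compute engine works on one buffer, the DMA prefetches
    the next tile into the other. In this software model the calls are
    sequential; on the FPGA, the prefetch and the compute of each
    iteration execute concurrently.
    """
    bufs = [None, None]
    bufs[0] = load(tiles[0])                    # prime the first buffer
    for i in range(len(tiles)):
        cur = i % 2
        nxt = (i + 1) % 2
        if i + 1 < len(tiles):
            bufs[nxt] = load(tiles[i + 1])      # prefetch next tile (overlapped on hardware)
        store(compute(bufs[cur]))               # consume the current buffer
```

The same ping-pong pattern can be stacked at several levels (DDR to on-chip cache, cache to register arrays), which is one plausible reading of "multistage".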

Description

Technical Field

[0001] The present application relates to the technical field of artificial intelligence, and in particular to an FPGA-based convolutional neural network acceleration processing system, method, and terminal.

Background Technique

[0002] In recent years, with the continuous development of artificial intelligence technology and the explosive growth of data volume, deep learning technology represented by the convolutional neural network (CNN) has been widely applied to brain-like cognitive tasks such as human-like visual analysis (target detection, classification, and tracking), and has achieved remarkable results. As application scenarios grow increasingly complex, network models are becoming structurally more complex and deeper, which poses serious challenges to real-time processing on general computing platforms.

[0003] Due to the demanding performance and energy efficiency requirements of embedded platforms, the deployment of convo...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/063; G06F7/544; G06T1/20
CPC: G06N3/063; G06F7/5443; G06T1/20; Y02D10/00
Inventors: 汪辉, 夏铭, 刘天洋, 田犁, 黄尊恺, 祝永新, 封松林
Owner: SHANGHAI ADVANCED RES INST CHINESE ACADEMY OF SCI