
Design method of lightweight convolution accelerator based on FPGA

An accelerator and convolution technology, applied in neural learning methods, neural architectures, biological neural network models, and similar fields. It addresses problems such as the high power consumption and large physical size of existing solutions, and achieves the effect of improved computing power.

Pending Publication Date: 2021-07-13
UNIV OF JINAN

AI Technical Summary

Problems solved by technology

However, the GPU's shortcomings are also obvious: despite its high performance, its power consumption is very large. In particular, when a GPU is used to train a convolutional neural network on the PC side, power consumption can reach hundreds of watts. Because of the GPU's inherently large size and high power consumption, its promotion and application on small, low-power mobile terminals and embedded platforms are limited.




Embodiment Construction

[0018] The present invention provides an FPGA-based convolution acceleration method. First, the existing data are pre-trained. If the network parameters exceed the storage limit, the data are sparsified by parameter compression, and the compressed data are encoded so they can be called up by index. Different convolutional layers receive dedicated design and optimization.
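
The patent does not name a concrete compression or encoding scheme, so the following Python sketch is only one plausible reading: magnitude pruning to sparsify the weights, with the survivors stored as (index, value) pairs for index-based calling. All function names and the threshold are hypothetical.

```python
import numpy as np

def compress_weights(weights, threshold=0.01):
    """Sparsify a weight tensor by magnitude pruning, then encode the
    surviving entries as (flat index, value) pairs for index calling.
    The threshold is an illustrative tuning knob, not from the patent."""
    flat = weights.ravel()
    keep = np.abs(flat) > threshold
    indices = np.nonzero(keep)[0].astype(np.uint32)   # index encoding
    values = flat[keep].astype(np.float32)
    return indices, values, weights.shape

def decompress_weights(indices, values, shape):
    """Rebuild the dense tensor from the index-encoded form."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[indices] = values
    return flat.reshape(shape)

# Example: compress a 3x3 kernel bank with 16 input / 32 output channels
w = np.random.randn(32, 16, 3, 3).astype(np.float32)
idx, vals, shape = compress_weights(w, threshold=0.5)
print(f"kept {len(vals)} of {w.size} weights")
```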

[0019] This makes data transmission and computation more convenient when the neural network is run. The specific implementation steps are as follows:

[0020] Step 01. Model initialization: a general-purpose processor is used to parse the neural network configuration information and weight data and write them into the cache RAM. After the model is initialized, normalization is performed so that all weight values follow a normal distribution within the range of 0 to 1;
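
The phrase "all weight values follow a normal distribution within the range of 0 to 1" is ambiguous; a minimal sketch, assuming it means standardizing the pretrained weights and then rescaling them into [0, 1], might look like this (function name hypothetical):

```python
import numpy as np

def normalize_weights(weights, eps=1e-8):
    """Standardize weights to zero mean / unit variance, then min-max
    rescale into [0, 1] -- one reading of the patent's Step 01."""
    w = weights.astype(np.float32)
    w = (w - w.mean()) / (w.std() + eps)              # normal-distribution shaping
    return (w - w.min()) / (w.max() - w.min() + eps)  # map into [0, 1]
```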

[0021] Step 02. To address the external memory access bandwidth limitation noted in Step 01, design space exploration is used to determine the loop tiling (cyclic partitioning) factors that maximize data reuse, thereby improving the operation performance of the whole network.
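
The patent gives no search procedure for the tiling factors, so the sketch below assumes a brute-force design space exploration in the spirit of common FPGA accelerator work: enumerate candidate tile sizes for the output channels (tm) and input channels (tn), keep only those whose buffers fit on chip, and pick the pair with the best compute-to-external-memory-traffic ratio. All sizes and the cost model are illustrative assumptions.

```python
def explore_tiling(M, N, R, C, K, bram_words):
    """Pick loop tiling factors (tm, tn) maximizing data reuse, i.e. the
    ratio of total MACs to words moved over the external memory bus."""
    total_ops = 2 * M * N * R * C * K * K     # multiply-accumulates
    best = None
    for tm in range(1, M + 1):
        for tn in range(1, N + 1):
            # on-chip buffers: weight tile + input tile + output tile
            buf = tm * tn * K * K + tn * R * C + tm * R * C
            if buf > bram_words:
                continue
            # simplified traffic model: weights read once, inputs re-read
            # once per output-channel tile group, outputs written once
            traffic = (M * N * K * K
                       + -(-M // tm) * N * R * C    # ceil(M / tm) passes
                       + M * R * C)
            reuse = total_ops / traffic
            if best is None or reuse > best[0]:
                best = (reuse, tm, tn)
    return best

# Example: 64 output / 32 input channels, 28x28 maps, 3x3 kernels
print(explore_tiling(M=64, N=32, R=28, C=28, K=3, bram_words=200_000))
```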



Abstract

The invention discloses an FPGA-based convolution acceleration method. First, a general-purpose processor parses the neural network configuration information and weight data and writes them into RAM; to cope with the external memory access bandwidth limitation, design space exploration is used to determine the loop tiling factors that maximize data reuse, improving the operation performance of the whole network. The FPGA reads the configuration information from RAM to generate the FPGA accelerator; the general-purpose processor reads image data and writes it into DRAM; the FPGA accelerator reads the image data from DRAM, starts computation, and writes the results back into DRAM. With this accelerator, all layers can be deployed on the FPGA chip simultaneously and run in pipeline fashion, improving operation performance and data throughput.
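
To illustrate the pipelined, all-layers-on-chip execution the abstract describes, here is a software analogy in Python: threads stand in for per-layer engines and queues for the FIFOs between them, so successive images overlap in time across layers. The layer functions are placeholders, not the patent's design.

```python
import queue
import threading

def make_stage(fn, q_in, q_out):
    """One pipeline stage: pull an item, process it, push it downstream."""
    def run():
        while True:
            item = q_in.get()
            if item is None:        # end-of-stream marker: propagate, stop
                q_out.put(None)
                return
            q_out.put(fn(item))
    return threading.Thread(target=run, daemon=True)

# Placeholder per-layer transforms standing in for on-chip conv engines
layers = [lambda x, k=k: x + k for k in range(3)]

fifos = [queue.Queue() for _ in range(len(layers) + 1)]
for i, fn in enumerate(layers):
    make_stage(fn, fifos[i], fifos[i + 1]).start()

for img in range(5):                # feed images; stages overlap in time
    fifos[0].put(img)
fifos[0].put(None)

while (result := fifos[-1].get()) is not None:
    print("result:", result)
```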

Description

Technical field

[0001] The invention relates to the field of convolution acceleration, in particular to a design method for an FPGA-based lightweight convolution accelerator.

Background technique

[0002] With the introduction of artificial intelligence (AI) into more and more applications such as consumer electronics, automotive electronics, and industrial control, artificial intelligence is facing unprecedentedly rapid development, and technologies such as deep learning and neural networks have ushered in a development climax. The larger the neural network, the greater the amount of computation required. Although traditional general-purpose processors can also complete artificial intelligence operations, they already struggle with high power consumption and high latency. Loading artificial intelligence computing power onto a VPU can avoid these problems and offers higher reliability. Target applications include image capture in in-vehicle systems, assisted driving, and automatic parking...


Application Information

IPC(8): G06N3/04; G06N3/08
CPC: G06N3/082; G06N3/045
Inventors: 臧阳阳, 张菁, 张天驰
Owner: UNIV OF JINAN