
Compression method of convolutional neural network and implementation circuit thereof

A convolutional neural network compression method, applied in the field of deep learning accelerator design, which addresses the problems of model irregularity introduced by pruning, low efficiency of parallel neural network computation, and the resulting impact on processing speed.

Pending Publication Date: 2020-10-27
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
Cites: 0 · Cited by: 5

AI Technical Summary

Problems solved by technology

However, numerous studies have shown that the model irregularity introduced by pruning algorithms makes parallel computation of the neural network inefficient, which severely impacts its processing speed.


Examples


Embodiment Construction

[0038] The present invention will be further described below in conjunction with the accompanying drawings and specific embodiments.

[0039] In a convolutional neural network, different layers have different characteristics. The front layers process large feature maps, so they require a large amount of computation but hold relatively few weights; the feature maps processed by the later layers shrink because of the pooling layers, so those layers require little computation but hold many weights.

[0040] This embodiment proposes a compression method based on the characteristics of the convolutional neural network, and the specific steps are:

[0041] (1) Divide the convolutional neural network into non-pruning layers and pruning layers;

[0042] (2) Set a pruning threshold, prune the weights in the convolutional neural network that are below the threshold, then retrain the convolutional neural network to update the weights that have not been pruned, ...
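Step (2) amounts to standard magnitude pruning with a binary mask, followed by retraining in which only the surviving weights are updated. A minimal NumPy sketch, with the threshold, shapes, and learning rate chosen purely for illustration (the patent text does not fix these values):

```python
import numpy as np

def prune(weights, threshold):
    """Zero out weights whose magnitude falls below the threshold;
    return the pruned weights and the binary mask (illustrative values)."""
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

def masked_update(weights, grad, mask, lr=0.01):
    """One retraining step: only surviving (unmasked) weights move,
    so pruned positions stay exactly zero."""
    return (weights - lr * grad) * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_pruned, mask = prune(w, threshold=0.5)
assert np.all(w_pruned[mask == 0] == 0)   # pruned positions are zero
```

Reapplying the mask after every gradient step is what keeps the sparsity pattern fixed during retraining; step (3) of the method later removes this mask for the non-pruning layers.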



Abstract

The invention provides a compression method for a convolutional neural network and an implementation circuit thereof. The method comprises the following steps: (1) dividing the convolutional neural network into non-pruning layers and pruning layers; (2) pruning the whole convolutional neural network, then retraining it to obtain a high-precision sparse network; (3) removing the weight mask of the non-pruning layers; (4) progressively quantizing the pruning layers; and (5) keeping the weights of the pruning layers unchanged while linearly quantizing the non-pruning layers, yielding a compressed convolutional neural network. The compression method greatly compresses the convolutional neural network model while preserving high processing performance. For this compression method, the invention further provides an implementation circuit comprising a distributed non-pruning-layer hardware processing circuit and a pruning-layer hardware processing circuit; the two circuits jointly implement the convolutional neural network in a pipelined manner, greatly improving processing performance.
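Step (5)'s linear quantization of the non-pruning layers can be sketched as uniform quantization over the weight range. The bit width and the min-max scaling scheme below are illustrative assumptions, not details specified in the abstract:

```python
import numpy as np

def linear_quantize(w, n_bits=8):
    """Map weights onto 2**n_bits evenly spaced levels spanning [min, max],
    then return the dequantized approximation (uniform quantization sketch)."""
    levels = 2 ** n_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels
    q = np.round((w - w_min) / scale)   # integer codes in [0, levels]
    return q * scale + w_min            # back to real-valued weights

w = np.linspace(-1.0, 1.0, 11)
w_hat = linear_quantize(w, n_bits=8)
# Quantization error is bounded by half a step of size (max - min) / levels.
assert np.max(np.abs(w_hat - w)) <= (w.max() - w.min()) / (2 ** 8 - 1)
```

Because every level is an integer multiple of a single scale factor, such linearly quantized weights map naturally onto fixed-point multipliers, which is consistent with the hardware-oriented framing of the patent.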

Description

Technical field

[0001] The invention relates to the field of deep learning accelerator design, in particular to a convolutional neural network compression method and an implementation circuit thereof.

Background technique

[0002] With the rapid development of convolutional neural networks in target detection and recognition, the accuracy of image recognition and detection has improved greatly. However, to achieve better detection and recognition performance, convolutional neural networks are made ever deeper, which brings a rapid increase in computation and an expansion of model size. Convolutional neural networks therefore need parallel devices, such as high-power GPUs, to accelerate them, in order to save training time or meet the requirements of real-time object detection. To deploy deep convolutional neural networks on low-power embedded devices, FPGA-based convolutional neural network acce...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/04, G06N3/063, G06N3/08
CPC: G06N3/063, G06N3/082, G06N3/045, Y02D10/00
Inventors: 刘伟强, 袁田, 王成华
Owner: NANJING UNIV OF AERONAUTICS & ASTRONAUTICS