
Convolutional neural network model compression method and device

A convolutional neural network compression technology, applied in biological neural network models, neural learning methods, neural architectures, etc., which addresses problems such as limited scalability, large loss of accuracy after compression, and low pruning efficiency, with the effect of reducing the huge parameter and computation counts, improving accuracy, and ensuring the compression rate.

Pending Publication Date: 2022-01-14
SHANGHAI JIAO TONG UNIV +1

AI Technical Summary

Problems solved by technology

[0004] However, the above methods each have shortcomings. The pruning algorithms above usually either perform no sparse training or perform sparse training under a fixed regularization constraint, and therefore cannot fully exploit the redundancy in the convolutional neural network, resulting in low pruning efficiency and a large loss of accuracy after compression. Quantization reduces the bit width at which the convolutional neural network's weights are stored, but does not reduce the network's computation amount. Weight decomposition changes the weight structure of the network, making it difficult to combine with other compression methods into a combined compression strategy, so its scalability is limited.
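The quantization shortcoming noted above can be made concrete with a small sketch. This is not from the patent; uniform 8-bit quantization and the helper names are assumed for illustration. Storage shrinks 4x, but the convolution still performs the same number of multiply-accumulates.

```python
import numpy as np

def quantize_uint8(w):
    """Uniformly quantize float32 weights to 8-bit codes plus scale/offset."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float32 weights from the 8-bit codes."""
    return q.astype(np.float32) * scale + lo

w = np.random.randn(64, 3, 3, 3).astype(np.float32)  # a toy conv kernel
q, s, lo = quantize_uint8(w)
print(w.nbytes, q.nbytes)  # 6912 vs 1728 bytes: storage shrinks 4x
# But inference still needs all 64*3*3*3 weights in the MAC loop,
# so the computation amount is unchanged; only storage is reduced.
w_hat = dequantize(q, s, lo)
print(np.abs(w - w_hat).max() <= s)  # True: error bounded by one step
```

The reconstruction error is bounded by half a quantization step, which is why accuracy often survives 8-bit storage even though the FLOP count does not change.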



Examples


Embodiment

[0088] In this embodiment, a compression method of a convolutional neural network model includes the following steps:

[0089] Step 1: apply a dynamic regularization constraint to the importance factor of each convolutional channel of the original convolutional neural network model and perform sparse training, so that the effective weights of the model are concentrated, as much as possible, in the channels with larger importance factors, yielding a sparse convolutional neural network model;
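Step 1 can be sketched as L1-regularized training of per-channel importance factors (as in network-slimming-style approaches, where the factor is typically a batch-norm scale). The patent does not specify its dynamic schedule, so the linearly growing penalty coefficient below, and all function names, are illustrative assumptions:

```python
import numpy as np

def sparse_train_gammas(gammas, grad_task, epochs=200, lr=0.1,
                        lam_start=1e-4, lam_end=1e-2):
    """Illustrative sparse training of channel importance factors.

    gammas:    1-D array of channel importance factors.
    grad_task: callable returning the task-loss gradient w.r.t. the
               factors (a stand-in for backprop through the real net).
    The L1 penalty coefficient lam grows linearly from lam_start to
    lam_end -- one possible reading of a 'dynamic regularization
    constraint' (vs. the fixed constraint criticized in [0004]).
    """
    g = np.asarray(gammas, dtype=float).copy()
    for t in range(epochs):
        lam = lam_start + (lam_end - lam_start) * t / max(epochs - 1, 1)
        # L1 subgradient pushes every factor toward zero; only channels
        # the task loss genuinely needs keep a large factor.
        g -= lr * (grad_task(g) + lam * np.sign(g))
    return g

# Toy example: the task loss pulls each factor toward a target of
# varying size; channels with small targets are driven toward zero.
targets = np.array([1.0, 0.8, 0.05, 0.02])
g = sparse_train_gammas(np.ones(4), lambda x: x - targets)
```

After training, the effective weight concentrates in the first two channels while the last two factors collapse toward zero, which is exactly the separation the global pruning of step 2 relies on.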

[0090] In this embodiment, before step 1 is performed, the original convolutional neural network model may be obtained by training with a target detection algorithm. In particular, the present invention limits neither the target detection algorithm used to obtain the original convolutional neural network model nor the application scenarios of that model; that is, the compression method of the above-mentioned convolutiona...



Abstract

The invention discloses a convolutional neural network model compression method and device. The method comprises the following steps: S1, performing sparse training on the importance factor γ of each convolution channel of an original convolutional neural network model under a dynamic regularization constraint, so that the effective weights of the model are concentrated, as much as possible, in channels with relatively large importance factors, obtaining a sparse convolutional neural network model; and S2, according to the magnitudes of the channel importance factors of the sparse convolutional neural network model, performing global channel pruning to cut off unimportant channels, obtaining a compressed, pruned convolutional neural network model.
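The global channel pruning of step S2 can be sketched as thresholding all channels across all layers against a single network-wide cut point. The percentile-style criterion and the function name are assumptions for illustration; the patent only states that unimportant channels are cut according to the magnitude of their importance factors:

```python
import numpy as np

def global_channel_prune(layer_gammas, prune_ratio=0.5):
    """Derive per-layer keep-masks from one global threshold.

    layer_gammas: list of 1-D arrays, one per conv layer, holding the
                  trained importance factors of that layer's channels.
    prune_ratio:  fraction of all channels (network-wide) to remove.
    Returns a boolean keep-mask per layer.
    """
    all_g = np.concatenate([np.abs(g) for g in layer_gammas])
    # Global threshold: the prune_ratio quantile over every channel in
    # the network, so heavily redundant layers lose more channels than
    # layers whose factors stayed large during sparse training.
    thresh = np.quantile(all_g, prune_ratio)
    return [np.abs(g) > thresh for g in layer_gammas]

# Toy network: two layers of trained importance factors.
gammas = [np.array([0.9, 0.01, 0.7, 0.02]), np.array([0.03, 0.8])]
masks = global_channel_prune(gammas, prune_ratio=0.5)
# masks -> [[True, False, True, False], [False, True]]
```

Because the threshold is global rather than per-layer, the pruning budget is allocated where the sparse training of S1 found the most redundancy, which is what lets the compression rate be controlled by a single ratio.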

Description

Technical field

[0001] The invention relates to the technical field of convolutional neural networks, and in particular to a compression method and device for a convolutional neural network model.

Background

[0002] Machine vision is an important branch of artificial intelligence, widely used in scenarios such as autonomous driving and security. Deep learning algorithms represented by convolutional neural networks perform well in tasks such as object detection, but convolutional neural network models are difficult to deploy on edge computing platforms, where memory and computing resources are scarce, because of their huge parameter counts and computing resource overhead. In practical applications, besides high accuracy requirements, many application fields also place strict requirements on the running speed of the algorithm. Under fixed computing resources, there is a contradiction between...

Claims


Application Information

IPC(8): G06N3/08, G06N3/04
CPC: G06N3/082, G06N3/045
Inventor: 付宇卓, 刘婷, 颜伟
Owner: SHANGHAI JIAO TONG UNIV