
A neural network structured pruning compression optimization method for a convolutional layer

A network-structure optimization method in the field of biological neural network models and neural architectures. It addresses the problems that existing approaches cannot smoothly use large training data sets, cannot save computing and storage resources, and consume large amounts of computing resources; the proposed method offers large computation-acceleration potential, convenient operation, and storage savings.

Inactive Publication Date: 2019-06-14
XI AN JIAOTONG UNIV
Cites: 0 | Cited by: 33

AI Technical Summary

Problems solved by technology

[0004] The present invention is mainly aimed at further improving the pruning of the second compression method. The idea of structured pruning is also used in prior-art schemes, where multiple types of convolution filters are used for each convolutional layer and the filter types are obtained through training. Such existing methods not only have long training cycles and consume huge computing resources (making it impossible to use large training data sets smoothly), but this kind of structured pruning also cannot save additional computing and storage resources during the forward computation of the model.




Embodiment Construction

[0033] The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments.

[0034] Referring to Fig. 1, which shows a schematic diagram of the overall structured pruning compression optimization principle. In an embodiment of the present invention, a structured pruning compression optimization method for the convolutional layers of a deep neural network comprises two main steps: sparse value distribution for each convolutional layer, and structured pruning.

[0035] (1) The sparse value distribution for each convolutional layer proceeds as follows. First, the original model is trained to obtain the parameter data of each prunable convolutional layer, and a single-layer importance score M_l is computed for each layer l. The scores M_l are summed to obtain M, and the overall importance of each layer, D_l, is computed as the share of M_l in M. The layers are ranked by D_l from small to large, and the range between the maximum and minimum values of D_l is divided into equal intervals, with sparse values assigned to the layers of each interval in sequence from small to large.
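The sparse value distribution step above can be sketched as follows. The importance metric (mean absolute weight as M_l) and the concrete sparse values are assumptions for illustration only; the patent does not fix them in this extract.

```python
import numpy as np

def allocate_sparsity(layer_weights, sparse_values=(0.2, 0.4, 0.6, 0.8)):
    """Assign a sparse value to each prunable convolutional layer.

    layer_weights: one weight array per layer. The importance metric
    (mean absolute weight) and the sparse_values tuple are illustrative
    assumptions, not the patent's exact choices.
    """
    # Single-layer importance scores M_l (assumed metric: mean |w|)
    M_l = np.array([np.abs(w).mean() for w in layer_weights])
    M = M_l.sum()            # summed importance M
    D_l = M_l / M            # overall importance of each layer

    # Divide [min(D_l), max(D_l)] into equal-width intervals and assign
    # sparse values in sequence: small D_l -> small sparse value.
    edges = np.linspace(D_l.min(), D_l.max(), len(sparse_values) + 1)
    idx = np.clip(np.searchsorted(edges, D_l, side="right") - 1,
                  0, len(sparse_values) - 1)
    return [sparse_values[i] for i in idx]
```

For example, three layers with mean absolute weights 1, 2 and 4 fall into the first, second and last interval and receive sparse values 0.2, 0.4 and 0.8 respectively.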



Abstract

The invention discloses a neural network structured pruning compression optimization method for convolutional layers. The method comprises the following steps. (1) Sparse value distribution for each convolutional layer: (1.1) train the original model, obtain the weight parameters of each prunable convolutional layer, and compute an importance score for each convolutional layer; (1.2) sort the importance scores from small to large, divide the range between the maximum and minimum values into equal segments, assign sparse values from small to large to the convolutional layers of each segment in sequence, and obtain the sparse value configuration of all prunable convolutional layers through model training and adjustment. (2) Structured pruning: select convolution filters according to the sparse values determined in step (1.2) and carry out structured pruning training, wherein only one convolution filter is used for each convolutional layer. The optimization method enables deep neural networks to run more conveniently on resource-limited platforms, saving parameter storage space and accelerating model computation.
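As a rough illustration of the structured pruning step (2), the sketch below removes the fraction of convolution filters with the smallest L1 norm from a layer's weight tensor. The L1-norm selection criterion and the function name are assumptions for illustration; the patent's exact filter-selection rule is not published in this extract.

```python
import numpy as np

def prune_filters(conv_weight, sparse_value):
    """Structured filter pruning sketch (assumed L1-norm criterion).

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    sparse_value: fraction of output filters to remove.
    Returns the pruned weight and the indices of the kept filters.
    """
    n_out = conv_weight.shape[0]
    n_prune = int(round(n_out * sparse_value))
    # L1 norm of each output filter
    norms = np.abs(conv_weight).reshape(n_out, -1).sum(axis=1)
    # Keep the filters with the largest norms, in their original order
    keep = np.sort(np.argsort(norms)[n_prune:])
    return conv_weight[keep], keep
```

In practice the pruned layer would then be fine-tuned (the "structured pruning training" of step (2)) to recover accuracy.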

Description

Technical field

[0001] The invention belongs to the fields of computer artificial intelligence, deep neural network optimization technology and image recognition technology, and particularly relates to a neural network structured pruning compression optimization method for convolutional layers.

Background technique

[0002] In the field of artificial intelligence, deep neural networks are one of the cornerstones, and their complexity and portability directly affect the application of artificial intelligence in daily life. Research on the acceleration and compression optimization of deep networks can make artificial intelligence more convenient to realize and to deploy in everyday services.

[0003] At present, common deep network acceleration and compression methods are as follows: 1. Low-Rank: low-rank decomposition; 2. Pruning: pruning methods are divided into structured pruning, kernel pruning and gradient pruning, with a wide range of applications; 3. Quantization: quantization, qua...
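As a minimal illustration of the low-rank decomposition approach mentioned in [0003] (not the method claimed by this patent), the sketch below factorizes a weight matrix with a truncated SVD, replacing an m-by-n matrix with two factors of total size rank*(m+n).

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated-SVD sketch of low-rank compression: W (m x n) ~= A @ B
    with A (m x rank) and B (rank x n). The rank is chosen by the caller
    as an accuracy/compression trade-off."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank]
    return A, B
```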

Claims


Application Information

IPC(8): G06N3/04
Inventor 梅魁志张良张增薛建儒鄢健宇常藩张向楠王晓陶纪安
Owner XI AN JIAOTONG UNIV