Joint neural network model compression method based on channel pruning and quantization training

A compression method and technique for trained models, applied in the field of joint neural network model compression, which addresses the problems that neural network models have a large number of parameters, require a large amount of computation, and are therefore difficult to deploy.

Pending Publication Date: 2020-09-11
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0003] In order to solve the problem that neural network models have a large number of parameters and a large amount of computation, which makes them difficult to deploy on general computing equipment, the present invention designs a joint neural network model compression method based on channel pruning and quantization training.



Examples


Embodiment 1

[0088] A joint neural network model compression method based on channel pruning and quantization training, the compression method comprising the following steps: channel pruning reduces the number of neural network channels, and quantization training replaces floating-point operations with integer operations.
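As an illustration of what replacing floating-point operations with integer operations involves, the following is a minimal sketch of an affine 8-bit quantization mapping in Python; the function names and the per-tensor min/max calibration are assumptions for illustration, not the patent's exact scheme.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Map a float array to unsigned integers via q = round(x / scale) + zero_point.

    Illustrative only: the patent learns its quantization parameters during
    training, while this sketch derives them from the tensor's min/max range.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) if x_max > x_min else 1.0
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Recover an approximate float array from its integer representation."""
    return (q.astype(np.float32) - zero_point) * scale
```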

[0089] Step 1: Sparsify the training model. During training, apply an L1-norm penalty to the BN-layer parameters following each convolutional layer that needs to be sparsified, so that these parameters acquire a structured-sparsity characteristic, preparing for the channel-pruning step that follows;
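A minimal sketch of step 1 in PyTorch, assuming standard nn.BatchNorm2d layers follow the convolutions to be sparsified; the helper name and the penalty coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn

def add_bn_l1_grad(model: nn.Module, sparsity_coeff: float = 1e-4) -> None:
    """Add the subgradient of an L1 penalty on the BN scale factors (gamma).

    Intended to be called after loss.backward() and before optimizer.step();
    the coefficient value is illustrative, not taken from the patent.
    """
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            # d|gamma|/dgamma = sign(gamma): drives gammas toward zero, giving
            # the structured sparsity used to select channels for pruning.
            module.weight.grad.add_(sparsity_coeff * torch.sign(module.weight.data))
```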

[0090] Step 2: Prune the trained model. According to the correspondence between convolutional layers and BN layers in the model, the pruning process removes the convolutional-layer channels whose BN-layer γ parameters are small, pruning each layer from shallow to deep, thereby forming a new model after channel pruning; ...
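A minimal sketch of how the prunable channels of step 2 could be identified from the BN γ magnitudes; the global percentile threshold and the function name are assumptions for illustration, and the actual removal of the matching convolution channels is omitted.

```python
import torch
import torch.nn as nn

def bn_channel_masks(model: nn.Module, prune_ratio: float = 0.5):
    """Build a keep-mask per BN layer from the magnitude of its gamma values."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            # True marks channels to keep; the corresponding conv output channels
            # (and the next layer's input channels) would be removed together.
            masks[name] = m.weight.data.abs() > threshold
    return masks
```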

Embodiment 2

[0158] The improved YOLOv3 network is compressed using the pruning algorithm of the present invention. The improved YOLOv3 structure uses MobileNetv2 as the feature extractor and replaces ordinary convolutions with depthwise separable convolutions to reduce computation. The improved YOLOv3 network achieves a test-set mAP of 78.46% on the VOC dataset; on a 512×512 input image, the computation is 4.15 GMACs and the model has 6.775 M parameters.
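A minimal sketch of a depthwise separable convolution block of the kind used to replace ordinary convolutions; the exact placement of BN and activation in the improved YOLOv3 is an assumption here.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))
```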

[0159] The above results were obtained by training for 80 epochs on the VOC training set, using standard data augmentation methods, including random cropping, perspective transformation, and horizontal flipping, and additionally using mixup augmentation. The Adam optimization algorithm was used with a cosine-annealing learning-rate schedule, an initial learning rate of 4e-3, and a batch size of 16. The subsequent sparse training and fine-tuning use the same hyperparameter settings.
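A minimal sketch of the stated optimizer and learning-rate schedule in PyTorch; the placeholder model and the omitted VOC data pipeline are assumptions, and only the hyperparameter values come from the text above.

```python
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

EPOCHS, BATCH_SIZE, INIT_LR = 80, 16, 4e-3       # values stated in the text
model = nn.Conv2d(3, 16, kernel_size=3)          # stand-in for the improved YOLOv3

optimizer = Adam(model.parameters(), lr=INIT_LR)
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    # ... one epoch over the VOC loader (random crop, perspective, flip, mixup) ...
    scheduler.step()
```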

[0160...



Abstract

The invention discloses a joint neural network model compression method based on channel pruning and quantization training. The method comprises the following steps: step 1, sparsify the training model; step 2, prune the trained model; step 3, fine-tune the model; step 4, after pruning is finished, quantize the model and construct a conventional floating-point calculation graph; step 5, insert pseudo-quantization modules at the positions corresponding to the convolution calculations in the graph, one for the convolution weights and one for the activation values, quantizing both to 8-bit integers; step 6, train with dynamic quantization until convergence; step 7, perform quantized inference; step 8, finally obtain the pruned and quantized model. Through the two techniques of pruning and quantization, the time and space consumption of the model is greatly reduced while the accuracy of the model is maintained.
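A minimal sketch of a pseudo-quantization (fake-quantization) module of the kind described in step 5, written in PyTorch with a straight-through estimator; the per-tensor min/max calibration and the class name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Simulate 8-bit quantization inside the floating-point calculation graph.

    Forward rounds values onto the 8-bit grid; backward passes gradients
    through unchanged (straight-through estimator), so training can proceed.
    """
    def __init__(self, num_bits: int = 8):
        super().__init__()
        self.qmin, self.qmax = 0, 2 ** num_bits - 1

    def forward(self, x):
        scale = (x.max() - x.min()).clamp(min=1e-8) / (self.qmax - self.qmin)
        zero_point = torch.round(self.qmin - x.min() / scale)
        q = torch.clamp(torch.round(x / scale) + zero_point, self.qmin, self.qmax)
        x_q = (q - zero_point) * scale
        # x + (x_q - x).detach(): quantized values in the forward pass,
        # identity gradient in the backward pass.
        return x + (x_q - x).detach()
```

In quantization-aware training, one such module would be attached to each convolution's weights and another to its activations, as described in step 5.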

Description

Technical field

[0001] The invention belongs to the technical field of data processing; in particular, it relates to a joint neural network model compression method based on channel pruning and quantization training.

Background technique

[0002] Existing neural network pruning algorithms can mainly be divided into three steps: sparse training, cutting out channels that have little influence, and fine-tuning on the dataset. Existing pruning algorithms often evaluate channel importance by computing the average of the convolution filter parameters. However, this evaluation only considers the influence of the convolution operation on the feature map and does not consider the influence of the BN layer on the feature map, so a network pruned by this method suffers a large loss in performance. In terms of quantization, the existing methods are mainly static quantization applied after model training is completed; this kind of quantization introduces a certain error into the quantized parameters...
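To illustrate the distinction drawn in the background, the following sketch computes both per-channel importance scores: the mean absolute filter weight criticized above, and the BN scale factor |γ|, which also captures how the BN layer rescales each channel's feature map. The function name is an assumption.

```python
import torch.nn as nn

def channel_importance(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Two per-output-channel importance scores for a conv layer and its BN layer."""
    filter_mean = conv.weight.data.abs().mean(dim=(1, 2, 3))  # mean |w| per filter
    bn_gamma = bn.weight.data.abs()                           # |gamma| per channel
    return filter_mean, bn_gamma
```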


Application Information

IPC(8): G06N 3/08, G06N 3/04
CPC: G06N 3/082, G06N 3/045
Inventor: 徐磊, 何林, 苏华友, 刘小龙, 罗荣, 张海涛, 李君宝
Owner HARBIN INST OF TECH