Hierarchical pruning method based on layer recovery sensitivity

A layer-sensitivity-based pruning technique, applied in the field of deep neural network model compression, which solves problems such as unstructured sparsity and achieves the effects of improved calculation speed, reduced model computation, and high classification accuracy.

Pending Publication Date: 2020-07-28
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

Pruning individual weights rather than whole filters is unstructured: it produces unstructured sparsity and requires a dedicated operating environment to realize acceleration and compression.
Therefore, current mainstream research methods mostly adopt structured pruning.
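
As a concrete illustration of the distinction (a minimal sketch, not taken from the patent; the layer, threshold and filter count are hypothetical), the PyTorch snippet below contrasts the two approaches: unstructured pruning zeroes individual weights and leaves a sparse tensor of unchanged shape, while structured pruning removes whole filters and yields a genuinely smaller dense layer.

```python
# Sketch contrasting unstructured weight pruning with structured filter pruning.
# All values (threshold, number of filters kept) are illustrative assumptions.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)

# Unstructured: zero individual weights below a magnitude threshold.
# The tensor keeps its dense shape, so acceleration needs sparse-aware runtimes.
threshold = 0.05
with torch.no_grad():
    conv.weight.mul_((conv.weight.abs() >= threshold).float())

# Structured: keep only the filters (output channels) with the largest L1 norms.
# The result is a smaller dense layer that runs faster on ordinary hardware.
keep = 4
norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one L1 norm per filter
kept_idx = norms.topk(keep).indices
pruned = nn.Conv2d(in_channels=3, out_channels=keep, kernel_size=3)
with torch.no_grad():
    pruned.weight.copy_(conv.weight[kept_idx])
    pruned.bias.copy_(conv.bias[kept_idx])
```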

Method used



Examples


Embodiment Construction

[0033] In order to make the objectives, technical solutions, design methods, and advantages of the present invention clearer, the following further describes the present invention in detail through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the present invention, but not to limit the present invention.

[0034] The principle and embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

[0035] The traditional layer pruning sensitivity analysis method prunes only one layer at a time on the basis of the complete network, as shown in Figure 1. Figure 1 depicts the traditional layer-by-layer pruning method based on sensitivity analysis, taking the VGG16 network model as an example: each layer contains multiple filters, and each filter has its own distinct weights. In the traditional pruning method, when the nth layer i...
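
To make this traditional procedure concrete, the following hedged sketch (assuming a PyTorch model and an `evaluate(model) -> accuracy` function supplied by the user; neither is defined in the patent text) prunes one convolutional layer at a time from a copy of the complete network and records the resulting accuracy drop as that layer's sensitivity.

```python
# Sketch of traditional per-layer sensitivity analysis (assumed helper:
# `evaluate(model) -> accuracy`; the trial pruning ratio is illustrative).
import copy
import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, ratio: float) -> None:
    """Zero the filters with the smallest L1 norms (mask-based, shapes unchanged)."""
    n_drop = int(conv.out_channels * ratio)
    if n_drop == 0:
        return
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    drop_idx = norms.argsort()[:n_drop]
    with torch.no_grad():
        conv.weight[drop_idx] = 0
        if conv.bias is not None:
            conv.bias[drop_idx] = 0

def layer_sensitivity(model: nn.Module, evaluate, ratio: float = 0.5) -> dict:
    """Prune one conv layer at a time from the full network; a larger accuracy
    drop means the layer is more sensitive to pruning."""
    baseline = evaluate(model)
    sensitivity = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            trial = copy.deepcopy(model)  # always start from the complete network
            prune_filters(dict(trial.named_modules())[name], ratio)
            sensitivity[name] = baseline - evaluate(trial)
    return sensitivity
```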



Abstract

The invention provides a hierarchical pruning method based on layer recovery sensitivity. The method comprises the steps of: S1, performing uniform pruning on each layer of a target neural network model; S2, performing layer recovery on the uniformly pruned target neural network model to obtain each layer's performance contribution to the model; S3, grading each layer of the target neural network model according to its contribution and setting a pruning proportion for each grade; and S4, pruning the original target neural network model according to the set pruning proportions. The contribution of each layer to model performance is judged more intuitively, efficiently and simply. Moreover, the oscillation problem caused by randomness in model parameter initialization is greatly reduced, the model's computational load is greatly reduced, hardware requirements are lowered, calculation speed is increased, computational energy consumption is reduced, and device real-time performance is improved.
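
Read as an algorithm, steps S1-S4 can be sketched roughly as follows (a hedged illustration only: the uniform ratio, the per-grade ratios, and the `prune_filters`/`evaluate` helpers from the earlier sketch are assumptions, not values fixed by the patent).

```python
# Rough sketch of the S1-S4 pipeline described in the abstract.
# Reuses the hypothetical `prune_filters` and `evaluate` helpers sketched above.
import copy
import torch.nn as nn

def hierarchical_prune(model, evaluate, uniform_ratio=0.5,
                       grade_ratios=(0.2, 0.5, 0.8)):
    # S1: uniformly prune every convolutional layer of a copy of the target model.
    uniform = copy.deepcopy(model)
    convs = [n for n, m in uniform.named_modules() if isinstance(m, nn.Conv2d)]
    for name in convs:
        prune_filters(dict(uniform.named_modules())[name], uniform_ratio)

    # S2: layer recovery -- restore one layer's original weights at a time and
    # take the performance gained back as that layer's contribution to the model.
    pruned_acc = evaluate(uniform)
    originals = dict(model.named_modules())
    contribution = {}
    for name in convs:
        trial = copy.deepcopy(uniform)
        dict(trial.named_modules())[name].load_state_dict(originals[name].state_dict())
        contribution[name] = evaluate(trial) - pruned_acc

    # S3: grade the layers by contribution and assign one pruning proportion per
    # grade (high contribution -> gentle pruning, low contribution -> aggressive).
    ranked = sorted(convs, key=lambda n: contribution[n], reverse=True)
    grade_size = max(1, len(ranked) // len(grade_ratios))
    ratio_of = {name: grade_ratios[min(i // grade_size, len(grade_ratios) - 1)]
                for i, name in enumerate(ranked)}

    # S4: prune the original target model with the per-grade proportions.
    final = copy.deepcopy(model)
    for name in convs:
        prune_filters(dict(final.named_modules())[name], ratio_of[name])
    return final
```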

Description

Technical field
[0001] The invention belongs to the field of artificial intelligence and is particularly suitable for the compression of deep neural network models.
Background technique
[0002] In recent years, deep neural networks have achieved tremendous progress in various fields. In pursuit of good model performance, research institutions have gradually designed network models with more weights and deeper structures, which inevitably causes redundancy. Although model performance has been greatly improved, it is difficult to run such huge networks on mobile edge devices with limited resources. Therefore, the study of deep neural network model compression is of great significance. Pruning an existing network is one of the mainstream methods of model compression.
[0003] The methods of deep neural network model compression mainly include the following: 1. Pruning: tailoring the existing network structure; 2. Knowledge distillation: using the structure informati...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/08; G06N3/04
CPC: G06N3/082; G06N3/045
Inventor: 李超, 徐勇军, 杨康, 严阳春
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI