
Flexible deep learning network model compression method based on channel gradient pruning

A channel gradient pruning technology, applied in the field of flexible deep learning network model compression, which solves the problems that the final floating-point operations (FLOPs) compression ratio cannot be predicted in advance and that the hyperparameter does not directly reflect the actual compression rate, achieving the effects of improved pruning accuracy and strong predictability.

Publication status: Pending
Publication date: 2021-02-23
Applicant: ZHEJIANG UNIV OF TECH

Problems solved by technology

However, after this method transforms the convolutional layer + BN layer structure into a fully connected layer structure, the hyperparameter introduced to control the compression rate does not directly reflect the actual compression rate; that is, the final compression ratio in floating-point operations (FLOPs) cannot be predicted before pruning ends.
In addition, a pruned channel no longer extracts features, which means that the pruned model inevitably suffers a certain degree of performance degradation.
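
For context, the FLOPs of a convolutional layer, and therefore the compression ratio achieved by removing channels, follow directly from the layer shape. The sketch below uses the standard multiply-accumulate estimate; it is illustrative background, not taken from the patent:

```python
def conv_flops(c_in, c_out, k_h, k_w, h_out, w_out):
    """Approximate FLOPs of one conv layer (multiply-accumulate counted as 2 ops)."""
    return 2 * c_in * c_out * k_h * k_w * h_out * w_out

# Pruning output channels (and the matching input channels of the next
# layer) shrinks the FLOPs of both layers proportionally.
before = conv_flops(64, 128, 3, 3, 56, 56)
after = conv_flops(64, 90, 3, 3, 56, 56)   # 128 -> 90 channels kept
print(f"compression ratio: {before / after:.2f}x")
```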




Detailed Description of Embodiments

[0029] In order to make the purpose, features and advantages of the present invention more clearly understood, the technical solution of the present invention is described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0030] The present invention provides a flexible deep learning network model compression method based on channel gradient pruning; the specific process is as follows:

[0031] Step 1: Obtain the deep convolutional neural network model to be pruned

[0032] Step 1-1: Add masking layer constraints to the convolutional layers to be pruned in the original deep convolutional neural network. Import the pre-trained weight parameters of the network into the constrained network to form the deep convolutional neural network model to be pruned. The number of channels in the masking layer corresponds to the number of channels in the convolutional layer under consideration for pruning, a...
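
A minimal sketch of such a channel-wise masking layer, assuming PyTorch (the patent does not name a framework; the class name ChannelMask and the conv/BN shapes are illustrative):

```python
import torch
import torch.nn as nn

class ChannelMask(nn.Module):
    """Per-channel binary mask placed after a convolutional layer.

    One mask entry per output channel of the preceding conv layer;
    entries start at 1 (all channels active) and are set to 0 when
    a channel is pruned.
    """
    def __init__(self, num_channels: int):
        super().__init__()
        # Non-trainable buffer: updated by the pruning criterion, not by SGD.
        self.register_buffer("mask", torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the mask over (batch, channel, height, width).
        return x * self.mask.view(1, -1, 1, 1)

# Wrap a pretrained conv + BN block with the mask constraint.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(128)
mask = ChannelMask(num_channels=128)  # matches the conv's output channels
block = nn.Sequential(conv, bn, mask)
```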


Abstract

The invention discloses a flexible deep learning network model compression method based on channel gradient pruning, comprising the steps of: 1, adding a masking layer constraint to the original network to obtain a to-be-pruned deep convolutional neural network model; 2, using the absolute value of the product of the channel gradient and the weight as the importance criterion to update the masking layer constraint of each channel, obtaining a mask and a sparse model; 3, carrying out a pruning operation on the sparse model based on the mask; and 4, retraining to obtain a compact deep convolutional neural network model. The invention further demonstrates the method on an actual object recognition APP, where the pruned model recognizes objects much faster. This addresses the problem that deep neural network models, because they occupy large storage space, memory and computing resources, cannot be applied in practical object recognition APPs or deployed to embedded devices, smart phones and similar platforms, and it thereby expands the application range of deep neural networks.
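
As a concrete reading of step 2, the sketch below scores each output channel of a convolutional layer by the absolute value of the product of its gradient and its weight, then zeroes the mask entries of the lowest-scoring channels. It assumes PyTorch; the sum over kernel elements and the global prune ratio are assumptions, since the abstract does not specify them:

```python
import torch

def channel_importance(conv_weight, conv_weight_grad):
    """Score each output channel by |gradient * weight|, summed over
    the kernel elements of that channel (an assumed reduction)."""
    scores = (conv_weight_grad * conv_weight).abs()
    return scores.sum(dim=(1, 2, 3))  # one score per output channel

def update_mask(mask, scores, prune_ratio=0.3):
    """Zero the mask entries of the lowest-scoring channels."""
    num_prune = int(prune_ratio * scores.numel())
    if num_prune > 0:
        idx = torch.argsort(scores)[:num_prune]
        mask[idx] = 0.0
    return mask
```

Once the mask is updated and the sparse model converges, channels whose mask entry is zero are physically removed (step 3) and the compact model is retrained (step 4).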

Description

Technical Field

[0001] The invention relates to a flexible deep learning network model compression method, and in particular to its practical application for realizing object recognition in a smart phone APP.

Background Technology

[0002] Deep learning has been applied to many tasks, such as image recognition, target detection, image segmentation, speech recognition and QA question answering, and has achieved better results than traditional methods. However, the excellent performance of deep neural networks is closely tied to their complex structure and huge number of parameters, and as the technology advances, neural network model structures in the deep learning field tend to grow even more complex. High-performance deep neural network models therefore place very high demands on hard disk storage space, memory bandwidth and platform computing resources. The contradiction between the high resource requirements of the deep ...


Application Information

IPC(8): G06N 3/08; G06N 3/04
CPC: G06N 3/082; G06N 3/045
Inventors: 禹鑫燚 (Yu Xinyi), 戎锦涛 (Rong Jintao), 欧林林 (Ou Linlin), 张铭扬 (Zhang Mingyang), 林密 (Lin Mi)
Owner: ZHEJIANG UNIV OF TECH