Compression method and compression device of neural network model, storage medium and equipment

A compression method for convolutional neural network models, applicable to biological neural network models, neural learning methods, neural architectures, and the like. It addresses problems such as incomplete network models, over-compression of convolutional layers, and low model accuracy, and achieves faster operation, high accuracy, and preserved model integrity.

Active Publication Date: 2020-09-25
SHENZHEN INST OF ADVANCED TECH

AI Technical Summary

Problems solved by technology

However, the disadvantage of existing transfer learning methods is that they can only provide a model with the same computational complexity as the neural network trained on the source data.
[0004] Compression algorithms for convolutional neural network models are a common means of reducing model complexity, but current compression algorithms have two shortcomings. First, when the target data are insufficient, the accuracy of the trained model is low. Second, excessive compression causes some convolutional layers to be over-compressed, or even compressed away entirely, which leaves the network model incomplete and makes its accuracy drop sharply.

Method used




Embodiment Construction

[0067] In order to make the objectives, technical solutions and advantages of the present invention clearer, the following further describes the present invention in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention, but not to limit the present invention.

[0068] Before describing the technical solution of the present invention in detail, the inventive concept of the present application is briefly described: the compression method of the present application combines transfer learning and compression algorithms, that is, it compresses while transferring, so that the two complement each other. While ensuring high accuracy, it reduces model complexity and increases computation speed; in addition, compressing only some of the convolutional layers further reduces model complexity while preserving the integrity of the model and avoiding a sudden drop in model accuracy.
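To make the "compress while preserving integrity" idea above concrete, the following is a minimal numpy sketch of filter pruning by L1 norm. This criterion is a common choice in the literature, not one the text commits to, and the function name and the `min_keep` floor are illustrative assumptions; the floor reflects the stated goal of never compressing a convolutional layer away entirely.

```python
import numpy as np

def prune_filters(weights, ratio, min_keep=1):
    """Drop the `ratio` fraction of filters with the smallest L1 norms.

    weights: array of shape (out_channels, in_channels, kH, kW).
    At least `min_keep` filters are always retained, so a layer is
    never compressed away completely.
    """
    n_out = weights.shape[0]
    n_keep = max(min_keep, n_out - int(n_out * ratio))
    norms = np.abs(weights).reshape(n_out, -1).sum(axis=1)  # per-filter L1 norm
    keep = np.sort(np.argsort(norms)[-n_keep:])             # strongest filters, original order
    return weights[keep]

rng = np.random.default_rng(1)
conv_w = rng.normal(size=(16, 3, 3, 3))   # toy conv layer: 16 filters of shape 3x3x3
pruned = prune_filters(conv_w, ratio=0.5) # half of the filters remain
```

Even with `ratio=1.0`, the `min_keep` floor leaves one filter in place, so the layer still exists and the network remains structurally complete.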



Abstract

The invention discloses a transfer-learning-based compression method for convolutional neural network models. The compression method comprises the steps of: obtaining a pre-trained transfer learning model; compressing each convolutional layer of the transfer learning model according to a preset compression ratio to obtain a first target network model; performing transfer learning on the first target network model with the target image data set to obtain a first compressed target model; selecting, according to a predetermined rule, a part of the convolutional layers in the first compressed target model for compression to obtain a second target network model; and performing transfer learning on the second target network model with the target image data set to obtain a second compressed target model. The advantages of transfer learning and convolutional compression complement each other: on the premise of guaranteeing high accuracy, model complexity is reduced and operation speed is increased, and compressing only part of the convolutional layers further reduces model complexity while avoiding a sudden drop in model accuracy.
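The five steps of the abstract can be sketched as a pipeline skeleton. Everything here is schematic: a "model" is just a list of per-layer filter counts, the training step is stubbed out, and the selection rule in step 4 (prune only the widest remaining layers) is a hypothetical stand-in, since the abstract does not state the predetermined rule.

```python
import numpy as np

def compress_all_layers(filters, ratio):
    """Step 2: compress every convolutional layer by a preset ratio."""
    return [max(1, int(round(n * (1 - ratio)))) for n in filters]

def compress_selected_layers(filters, selected, ratio):
    """Step 4: compress only a chosen subset of layers (predetermined rule)."""
    return [max(1, int(round(n * (1 - ratio)))) if i in selected else n
            for i, n in enumerate(filters)]

def transfer_learn(filters):
    """Steps 3 and 5: fine-tune on the target image data set (omitted here)."""
    return filters

pretrained = [64, 128, 256, 512]                  # step 1: pre-trained transfer learning model
first_target = compress_all_layers(pretrained, 0.25)
first_compressed = transfer_learn(first_target)   # first compressed target model
selected = set(np.argsort(first_compressed)[-2:]) # hypothetical rule: two widest layers
second_target = compress_selected_layers(first_compressed, selected, 0.25)
second_compressed = transfer_learn(second_target) # second compressed target model
```

The two compression passes alternate with transfer learning, matching the "compress while transferring" concept: after the uniform pass every layer shrinks by the preset ratio, and the selective pass then shrinks only the chosen layers further.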

Description

Technical field

[0001] The present invention belongs to the field of information technology and, in particular, relates to a transfer-learning-based compression method and compression device for a convolutional neural network model, a computer-readable storage medium, and computer equipment.

Background technology

[0002] The basic principle of transfer learning is to adapt a model trained on one problem to a new problem through simple adjustments. Because of the complexity of large-scale networks, training a complex convolutional neural network requires a large amount of labeled data and takes a long time. Transfer learning is a workaround for these demands on labeled data and training time. When data are sufficient, the effect of transfer learning is not as good as complete retraining, but the training time and the number of training samples required for transfer learning are much smaller than those for training a complete model, and good accuracy can still be obtained. [0003]...
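The "simple adjustments" idea of transfer learning described in [0002] can be illustrated with a minimal numpy sketch: a "pretrained" feature extractor is frozen, and only a small new output head is trained on the target task. All names, sizes, and the toy data here are illustrative assumptions, not part of the invention.

```python
import numpy as np

rng = np.random.default_rng(0)

W_feat = rng.normal(size=(8, 4))        # frozen "pretrained" weights: 8 inputs -> 4 features
X = rng.normal(size=(32, 8))            # small labeled target data set
y = (X[:, 0] > 0).astype(float)         # toy binary labels for the new task

def features(x):
    """Frozen pretrained feature extractor; never updated below."""
    return np.maximum(x @ W_feat, 0.0)  # ReLU features

def loss(w):
    """Mean logistic loss of head w on the target data."""
    z = features(X) @ w
    return float(np.mean(np.log1p(np.exp(-z)) + (1 - y) * z))

w_head = np.zeros(4)                    # new task-specific head, trained from scratch
initial_loss = loss(w_head)
for _ in range(300):                    # plain gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-features(X) @ w_head))
    w_head -= 0.1 * features(X).T @ (p - y) / len(y)
final_loss = loss(w_head)
```

Only the four head weights are learned; the frozen extractor supplies the transferred knowledge, which is why far fewer samples and training steps are needed than for retraining the whole model.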

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; Y02D10/00
Inventor: 王卡风, 高希彤, 须成忠
Owner SHENZHEN INST OF ADVANCED TECH