Convolutional neural network model compression method, apparatus and device, and medium

A convolutional neural network compression technology, applicable in biological neural network models, neural learning methods, neural architectures, etc. It addresses the problem that complex DNNs cannot be deployed on storage- and compute-constrained systems or devices, with the effects of reducing storage cost and model size while retaining or even improving accuracy and performance.

Pending Publication Date: 2022-04-15
INST OF MICROELECTRONICS CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0004] The embodiments of the present application provide a convolutional neural network model compression method, apparatus, device, and medium to solve the technical problems in the prior art that complex DNNs occupy large amounts of storage and computing resources on servers and other devices, and that complex DNNs cannot be applied to systems or devices with limited storage and computing power. The technical effect is a streamlined DNN that retains or even improves its performance, occupies fewer storage and computing resources, and can be deployed on systems or devices with limited storage and computing power.

Method used




Embodiment Construction

[0045] The embodiments of the present application provide a method for compressing a convolutional neural network model, which solves the technical problem in the prior art that complex DNNs cannot be applied to systems or devices with limited storage and computing capabilities.

[0046] The technical solution of the embodiments of the present application addresses the above technical problems; its general idea is as follows:

[0047] A method for compressing a convolutional neural network model, the method comprising: setting a mask for each parameter group in each layer of the convolutional network model to be compressed, to generate a model to be trained with masks, where the initial mask value of each parameter group is 1; performing N rounds of periodic training on the model to be trained, where N is a positive integer; and performing a network sparse step on the first model obtained from the training. The network sparse step includes: dete...
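The mask setup in paragraph [0047] can be sketched as follows. This is a minimal illustration, not the patent's disclosed implementation; the layer names, tensor shapes, and the dictionary representation of parameter groups are assumptions introduced for the example.

```python
import numpy as np

def init_masks(layers):
    """Attach a binary mask, initially all ones, to every parameter
    group of every layer (the 'model to be trained with masks')."""
    return {name: np.ones_like(w) for name, w in layers.items()}

def apply_masks(layers, masks):
    """Element-wise product: a 0 in the mask prunes that parameter."""
    return {name: w * masks[name] for name, w in layers.items()}

# Hypothetical two-layer model; parameter groups keyed by layer name.
layers = {
    "conv1": np.random.randn(8, 3, 3, 3),    # 8 filters of shape 3x3x3
    "conv2": np.random.randn(16, 8, 3, 3),
}
masks = init_masks(layers)
# With all-ones initial masks, the masked model equals the original,
# so training starts from the uncompressed network.
masked = apply_masks(layers, masks)
assert all(np.array_equal(masked[k], layers[k]) for k in layers)
```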



Abstract

The invention discloses a convolutional neural network model compression method, apparatus, device, and medium. The method comprises the steps of: setting a mask for each parameter group in each layer of a convolutional network model to be compressed, generating a model to be trained with masks; performing N rounds of periodic training on the model to be trained and executing a network sparse step on the first model obtained through training; when a second model meets a preset model pruning condition, judging whether the second model meets a training termination condition; if not, taking the second model as the model to be trained and repeating the N rounds of periodic training and the network sparse step until an obtained second model meets the training termination condition; and obtaining a compressed convolutional network model based on the mask value of each parameter group in each layer of the second model obtained by training. The method not only reduces the size of the model but also retains, and can even improve, the original accuracy and performance of the convolutional neural network to be compressed.
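The train-and-sparsify loop described in the abstract can be sketched as follows. This is a hedged illustration only: the excerpt does not disclose the pruning criterion, the model pruning condition, or the termination condition, so magnitude-based pruning, a fixed prune fraction, and a fixed round count are stand-in assumptions, and `train_round` is a hypothetical placeholder for the N rounds of periodic training.

```python
import numpy as np

def sparsify(masks, weights, prune_fraction=0.2):
    """Network sparse step (sketch): per parameter group, zero the mask
    entries of the smallest-magnitude surviving parameters. The
    magnitude criterion is an assumption, not taken from the patent."""
    new_masks = {}
    for name, w in weights.items():
        m = masks[name].copy()
        alive = np.abs(w[m == 1])          # magnitudes of unpruned params
        k = int(alive.size * prune_fraction)
        if k > 0:
            threshold = np.sort(alive)[k - 1]
            m[(np.abs(w) <= threshold) & (m == 1)] = 0
        new_masks[name] = m
    return new_masks

def compress(weights, rounds=3, train_round=lambda w, m: w):
    """Outer loop from the abstract: alternate training with the sparse
    step. `rounds` stands in for the undisclosed termination condition."""
    masks = {name: np.ones_like(w) for name, w in weights.items()}
    for _ in range(rounds):
        weights = train_round(weights, masks)   # N-round periodic training
        masks = sparsify(masks, weights)
    # Compressed model: only parameters whose mask value is 1 survive.
    return {name: w * masks[name] for name, w in weights.items()}, masks
```

For example, with a single parameter group holding the ten weights 1..10 and the default 20% prune fraction, one call to `sparsify` zeroes the two smallest weights; each further round prunes a fraction of the survivors, so sparsity grows gradually, matching the abstract's repeated train-then-sparsify cycle.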

Description

Technical field

[0001] The present invention relates to the technical field of deep learning, and in particular to a convolutional neural network model compression method, apparatus, device, and medium.

Background technique

[0002] In recent years, with the continuous development of and breakthroughs in deep learning, the structure of deep neural networks (DNN, Deep Neural Networks) has become deeper and deeper, growing from network structures of a dozen or so layers to network structures of hundreds of layers. Deeper networks indicate that the capability of neural networks has been greatly improved, which in turn has enabled neural networks to achieve better performance on various tasks, for example: image classification, object detection, image segmentation, natural language processing, etc. However, as DNNs become deeper and more complex, neural network models have more and more parameters, often tens of billions, and the si...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/08
Inventor: 刁华彬
Owner: INST OF MICROELECTRONICS CHINESE ACAD OF SCI