
Neural network model compression method and apparatus, storage medium and electronic device

A neural network model compression method and apparatus, applied in the field of computer storage media and electronic devices. The technology addresses problems such as the long feed-forward time of deep neural networks, which limits their application, and achieves the effect of strong feature extraction ability in the compressed model.

Inactive Publication Date: 2018-06-29
BEIJING SENSETIME TECH DEV CO LTD


Problems solved by technology

However, due to their depth and large number of network parameters, deep neural networks usually have a long feed-forward time, which to some extent limits their application on devices with limited computing resources.



Examples


Embodiment 1

[0046] Figure 1 is a flowchart of a neural network model compression method according to Embodiment 1 of the present invention.

[0047] Referring to Figure 1, in step S110, a first neural network model is acquired.

[0048] The first neural network model here may be a trained neural network model. That is to say, the neural network model compression method according to Embodiment 1 of the present invention is suitable for compressing any general neural network model.

[0049] In the embodiments of the present invention, the training of the first neural network model is not limited; it may be pre-trained by any conventional network training method. Depending on the functions, characteristics, and training requirements to be realized, the first neural network model may be pre-trained using a supervised, unsupervised, reinforcement, or semi-supervised learning method.

...

Embodiment 2

[0060] Figure 2 is a flowchart of a neural network model compression method according to Embodiment 2 of the present invention.

[0061] Referring to Figure 2, in step S210, the first neural network model is acquired. The processing of this step is similar to that of the aforementioned step S110 and will not be repeated here.

[0062] According to an optional implementation of the present invention, in step S220, based on the receptive field of a convolution layer with a larger convolution kernel in the first neural network model, that convolution layer is equivalently replaced by multiple convolution layers with smaller convolution kernels, so as to obtain the second neural network model.

[0063] The first neural network model usually includes convolutional layers, and one convolutional layer with a large kernel can be replaced by multiple convolutional layers with smaller kernels for the convolutional l...
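The equivalence in step S220 rests on receptive-field arithmetic: a stack of stride-1 convolutions with kernel sizes k₁, …, kₙ covers the same input region as a single kernel of size 1 + Σ(kᵢ − 1), while using fewer weights. A minimal sketch of this bookkeeping (illustrative helper functions, not the patent's implementation):

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of a chain of conv layers (dilation 1)."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s
    return rf

def conv_weights(k, channels):
    """Weight count of a k x k conv with `channels` input and output channels."""
    return k * k * channels * channels

# One 5x5 layer and two stacked 3x3 layers see the same 5x5 input region...
assert receptive_field([5]) == receptive_field([3, 3]) == 5
# ...but the stack needs fewer weights (18c^2 vs 25c^2 for c channels):
c = 64
print(conv_weights(5, c), 2 * conv_weights(3, c))  # prints: 102400 73728
```

The same arithmetic gives the classic VGG-style substitution of a 7×7 kernel by three 3×3 layers.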

Embodiment 3

[0095] Figure 3 is a flowchart of a neural network model compression method according to Embodiment 3 of the present invention.

[0096] Referring to Figure 3, in step S310, the first neural network model is acquired. The processing of this step is similar to that of the aforementioned step S110 and will not be repeated here.

[0097] In step S320, the depth of the first neural network model is maintained or increased, and at least one network parameter of at least one network layer of the first neural network model is compressed, to obtain a second neural network model.

[0098] The processing of this step is similar to the processing of the aforementioned step S120 or step S220, and will not be repeated here.
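One way to read step S320 is that compression shrinks per-layer parameters (for example, channel widths) while the layer count, i.e. the depth, stays the same or grows. A rough illustration of the parameter bookkeeping, with hypothetical layer widths that are not taken from the patent:

```python
def conv_params(c_in, c_out, k=3):
    """Weights + biases of one k x k convolution layer."""
    return k * k * c_in * c_out + c_out

def total_params(widths, k=3):
    """Parameter count of a plain chain of conv layers with the given widths."""
    return sum(conv_params(ci, co, k) for ci, co in zip(widths, widths[1:]))

original = [3, 64, 128, 256]    # hypothetical first model (input + 3 layers)
compressed = [3, 32, 64, 128]   # halved channel widths, same depth
assert len(original) == len(compressed)  # depth is maintained
ratio = total_params(compressed) / total_params(original)
print(f"compressed to {ratio:.1%} of the original parameters")
```

Halving every width cuts each layer's weight matrix by roughly a factor of four, which is why the ratio lands near 25% rather than 50%.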

[0099] Since the second neural network model is generated by compressing the first neural network model, its network parameters are reduced. In order to improve the feature expression ability of the second neural network model, in the neural netwo...



Abstract

Embodiments of the invention provide a neural network model compression method and apparatus, a storage medium, and an electronic device. The method comprises the steps of obtaining a first neural network model; maintaining or increasing the depth of the first neural network model and compressing at least one network parameter of at least one network layer of the first neural network model to obtain a second neural network model; and training the second neural network model based on a sample data set and at least according to an output of the first neural network model. A compressed neural network model with strong feature extraction capability and performance comparable to that of the uncompressed model can thus be obtained by training; the method is universal and is suitable for neural network models realizing any function.
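Training the second (student) model "at least according to an output of the first" (teacher) model is in the spirit of knowledge distillation. A minimal NumPy sketch of one such training loss; the temperature T, weight alpha, and example logits are illustrative choices, not values from the patent:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * soft-target cross-entropy + (1 - alpha) * hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -np.mean(np.sum(p_teacher * log_p_student, axis=-1)) * T * T
    hard_probs = softmax(student_logits)[np.arange(len(labels)), labels]
    hard = -np.mean(np.log(hard_probs + 1e-12))
    return alpha * soft + (1 - alpha) * hard

teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]])  # hypothetical logits
student = np.array([[3.5, 1.2, 0.3], [0.0, 2.5, 0.4]])
labels = np.array([0, 1])
loss = distillation_loss(student, teacher, labels)
```

The soft term pulls the compressed model's output distribution toward the teacher's, which is one concrete way the compressed model can recover the uncompressed model's feature extraction quality.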

Description

Technical Field

[0001] Embodiments of the present invention relate to artificial intelligence technology, and in particular to a neural network model compression method and apparatus, a computer storage medium, and an electronic device.

Background

[0002] In recent years, deep neural networks have achieved breakthrough performance in many tasks such as computer vision and natural language processing. Deep neural network models trade a large number of network weights for strong expressive ability and thus strong performance. For example, the AlexNet model exceeds 200 MB in size, and the VGG-16 model exceeds 500 MB. However, due to their depth and large number of network parameters, deep neural networks usually have a long feed-forward time, which to some extent limits their application on devices with limited computing resources.

Summary of the Invention

[0003] An embodiment of the present invention ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/08
CPC: G06N3/084, G06N3/045
Inventor: 王飞
Owner: BEIJING SENSETIME TECH DEV CO LTD