
Neural network model compression method and device

A neural network model compression technology, applied in the field of neural network model compression methods and devices, which addresses problems such as limited compression ratio, large loss of model data, and insufficient sampling.

Active Publication Date: 2017-03-08
BEIJING BAIDU NETCOM SCI & TECH CO LTD
Cites: 8 | Cited by: 83

AI Technical Summary

Problems solved by technology

[0004] However, these methods have the following problems. The half-precision compression method only compresses each floating-point number to 16 bits, so the achievable compression is limited. In the quantization method based on random sampling, the randomness of the sampling causes the sampled points to fall near the peak of the distribution, so important large-valued elements may be undersampled. In the linear quantization method, large weights and small weights are treated uniformly and no additional sampling is performed where the data distribution is dense, leading to a large loss of model data and a poor compression effect.
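For intuition, here is a minimal sketch of the two baseline quantizers criticized above (not from the patent; the function names, bit widths, and test distribution are illustrative), showing why a uniform step and a randomly drawn codebook both serve the dense cluster of small weights poorly:

import numpy as np

def linear_quantize(w, num_bits=8):
    # Uniform ("linear") quantization: a single step size set by the min/max
    # range, so densely clustered small weights get no extra levels.
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (2 ** num_bits - 1)
    return lo + np.round((w - lo) / step) * step

def random_sample_quantize(w, num_bits=8, seed=0):
    # Random-sampling quantization: codebook entries are drawn from the
    # weights themselves, so they cluster near the peak of the distribution
    # and rare large-magnitude weights may have no nearby codeword.
    rng = np.random.default_rng(seed)
    codebook = rng.choice(w, size=2 ** num_bits, replace=False)
    return codebook[np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)]

rng = np.random.default_rng(1)
w = np.concatenate([rng.normal(0.0, 0.01, 10_000), [0.9, -0.8]])  # peaked at 0
print(np.abs(w - linear_quantize(w, 4)).max())         # coarse on small weights
print(np.abs(w - random_sample_quantize(w, 4)).max())  # large values undersampled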




Detailed Description of the Embodiments

[0024] Embodiments of the present invention are described in detail below, examples of which are shown in the drawings, wherein the same or similar reference numerals designate the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the figures are exemplary and are intended to explain the present invention and should not be construed as limiting the present invention.

[0025] The neural network model compression method and device according to the embodiments of the present invention will be described below with reference to the accompanying drawings.

[0026] Figure 1 is a flowchart of a neural network model compression method according to an embodiment of the present invention. It should be noted that the neural network model compression method of the embodiments of the present invention can be applied to the neural network model compression device of the embodiments of the present invention. ...



Abstract

The invention discloses a neural network model compression method and device. The method comprises the steps of: for each neuron layer in a neural network model, determining a model parameter set of that layer, the model parameter set including a plurality of model parameters; performing a first transformation on the plurality of model parameters to generate a plurality of intermediate parameters; quantizing the plurality of intermediate parameters according to a preset quantization step size to obtain a plurality of quantization parameters; selecting a plurality of sampling quantization points from the plurality of quantization parameters according to a preset quantization bit number; generating quantized values of the plurality of model parameters according to the values of the plurality of quantization parameters and the plurality of sampling quantization points; and compressing and storing the plurality of model parameters according to the quantized values. The method better preserves model quality while greatly reducing the size of the neural network model and its consumption of computing resources, especially memory.
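As one concrete reading of these steps, below is a minimal per-layer sketch. The abstract does not specify the first transformation, the step size, or how the sampling quantization points are chosen; the log-magnitude transform, the most-frequent-value selection, and all names here are assumptions made purely for illustration:

import numpy as np

def compress_layer(params, step=0.5, num_bits=4):
    signs = np.sign(params)
    # Step 1: first transformation of the model parameters (assumed: log|w|).
    intermediate = np.log2(np.abs(params) + 1e-12)
    # Step 2: quantize the intermediate parameters with a preset step size.
    quantized = np.round(intermediate / step).astype(np.int64)
    # Step 3: select 2**num_bits sampling quantization points; here the most
    # frequent quantized values, since the abstract only says "according to
    # a preset quantization bit number".
    values, counts = np.unique(quantized, return_counts=True)
    points = values[np.argsort(counts)[::-1][: 2 ** num_bits]]
    # Step 4: map each quantized value to its nearest sampling point and keep
    # only a small per-layer codebook plus one short index per parameter.
    idx = np.abs(quantized[:, None] - points[None, :]).argmin(axis=1)
    return points, idx.astype(np.uint8), signs

def decompress_layer(points, idx, signs, step=0.5):
    # Invert the assumed transform to recover approximate parameter values.
    return signs * np.exp2(points[idx] * step)

w = np.random.default_rng(0).normal(0.0, 0.05, 1000)
points, idx, signs = compress_layer(w)
print(np.abs(w - decompress_layer(points, idx, signs)).mean())

At 4 bits, each 32-bit float is replaced by a sign bit and a 4-bit codebook index plus a share of a tiny per-layer codebook, roughly a 6x size reduction before any further compression of the stored indices.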

Description

Technical Field

[0001] The invention relates to the field of computer technology, and in particular to a neural network model compression method and device.

Background

[0002] At present, as the training data for deep neural networks grows, the number of parameters used to represent a model increases rapidly in order to better learn the characteristics of the training data and improve the model's effect. The consumption of computing resources therefore also increases rapidly, which restricts the application scenarios of deep neural networks, for example on mobile phones and other devices with limited computing resources.

[0003] In related technologies, the size of the model is usually reduced by quantizing and compressing the neural network model. At present, there are usually the following two compression methods for neural network models: 1) half-precision-based compression, the principle of which is to compress the floating-point numbers represented by 32 bits in the n...
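For scale, the half-precision baseline described in [0003] amounts to a single cast. A minimal numpy illustration (the array and its size are made up):

import numpy as np

w32 = np.random.default_rng(0).normal(0.0, 0.05, 1_000_000).astype(np.float32)
w16 = w32.astype(np.float16)           # the entire compression step
print(w32.nbytes, "->", w16.nbytes)    # 4000000 -> 2000000 bytes: fixed 2x
print(np.abs(w32 - w16.astype(np.float32)).max())  # small rounding error

The hard 2x ceiling is exactly the "limited compressed size" criticized in paragraph [0004] above.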


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/02; G06N3/06; H03M1/54; H03M1/38
CPC: G06N3/02; G06N3/06; H03M1/38; H03M1/54
Inventors: 朱志凡, 冯仕堃, 周坤胜, 石磊, 何径舟
Owner: BEIJING BAIDU NETCOM SCI & TECH CO LTD