
Neural network model compression method and device

A neural network model compression method and device. The technology addresses the problem that a model's storage footprint and forward-inference speed cannot meet online requirements, and achieves quantized compression with a marked reduction in storage space.

Pending Publication Date: 2020-03-27
BEIJING SOGOU TECHNOLOGY DEVELOPMENT CO LTD

AI Technical Summary

Problems solved by technology

It can be seen that, with such storage and computing requirements, the model's storage footprint and forward-inference speed cannot meet online requirements on smart devices, especially low-end ones.



Embodiment Construction

[0067] In order to enable those skilled in the art to better understand the solutions of the embodiments of the present invention, the embodiments of the present invention will be further described in detail below with reference to the accompanying drawings and embodiments.

[0068] Usually, when building a neural network model, all data involved in the calculation is computed and stored in 32-bit full-precision floating-point format, namely float32. When the network model is large, the required memory resources become very large. A float32 number is composed of three parts: one sign bit, eight exponent bits, and twenty-three mantissa bits. Floating-point addition and subtraction proceed in roughly four steps:

[0069] 1. Operand check: if at least one of the operands is zero, the result can be obtained directly;

[0070] 2. Compare the exponents and align them (exponent matching);

...
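To make the float32 layout and the first two addition steps above concrete, here is a small Python sketch. The bit widths follow IEEE 754 single precision; the toy adder (integer mantissa times a power of two) is an illustrative assumption, not the patent's implementation, and the steps after exponent alignment are elided in the excerpt above, so their handling here follows the standard procedure.

```python
import struct

def float32_fields(x: float):
    """Split an IEEE 754 float32 into its sign (1 bit), exponent (8 bits), mantissa (23 bits)."""
    # Pack as single precision, then read the same bytes back as a 32-bit unsigned integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def toy_add(m1, e1, m2, e2):
    """Add two toy numbers of the form mantissa * 2**exponent, following the steps above."""
    # Step 1: operand check -- if either operand is zero, return the other directly.
    if m1 == 0:
        return m2, e2
    if m2 == 0:
        return m1, e1
    # Step 2: compare exponents and shift the smaller operand's mantissa to match.
    if e1 < e2:
        m1, e1 = m1 >> (e2 - e1), e2
    elif e2 < e1:
        m2, e2 = m2 >> (e1 - e2), e1
    # Remaining steps (elided in the excerpt): add the aligned mantissas, then normalize.
    m, e = m1 + m2, e1
    while m and m % 2 == 0:
        m, e = m >> 1, e + 1
    return m, e

print(float32_fields(-1.5))   # (1, 127, 4194304): sign=1, biased exponent=127, mantissa bits for 0.5
print(toy_add(0, 0, 5, 2))    # (5, 2): the zero operand is handled by the operand check
print(toy_add(1, 2, 1, 2))    # (1, 3): 4 + 4 = 8 after normalization
```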



Abstract

The invention discloses a neural network model compression method and device. The method comprises: training the parameters of a neural network model; obtaining a model parameter set for each layer of the neural network model, wherein each set comprises a plurality of model parameters; determining quantization parameters for each layer of the neural network model; for the model parameters of each layer, quantizing them using that layer's quantization parameters to obtain quantized model parameters; and compressing and storing the neural network model according to the quantized model parameters. The invention can greatly reduce the storage space occupied by the neural network model.
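As a rough illustration of the per-layer quantization the abstract describes, the sketch below uses symmetric int8 quantization with one scale per layer. The function names and the choice of a max-absolute-value scale are assumptions for illustration, not the patent's exact scheme.

```python
import numpy as np

def quantize_layer(weights: np.ndarray, num_bits: int = 8):
    """Quantize one layer's float32 weights to signed integers with a single per-layer scale."""
    qmax = 2 ** (num_bits - 1) - 1                     # e.g. 127 for int8
    # One quantization parameter (scale) per layer; the tiny floor guards an all-zero layer.
    scale = max(np.abs(weights).max(), 1e-12) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the quantized values."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights, for illustration only.
w = np.array([0.4, -1.0, 0.2], dtype=np.float32)
q, s = quantize_layer(w)
print(q)                  # int8 values: [ 51 -127   25]
print(dequantize(q, s))   # approximately [0.4, -1.0, 0.2]
```

Storing the int8 array plus one float scale per layer takes roughly a quarter of the space of the original float32 parameters, which is the kind of reduction the abstract claims.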

Description

technical field [0001] The invention relates to the field of artificial intelligence, and in particular to a neural network model compression method and device. Background technique [0002] Deep learning is an important means of artificial intelligence technology. For example, the classic RNN (Recurrent Neural Network) model has important applications in natural language processing and performs well in applications such as input methods and speech recognition. For example, when the pinyin string "feijizhengzaihuaxing" is input and the system lexicon size and N-gram language model are limited, the candidate given by current mainstream input methods is "the plane is in shape"; after introducing the LSTM (Long Short-Term Memory) recurrent neural network model, the candidate given by the input method is "the plane is taxiing". [0003] In the application of the RNN model, on the one...

Claims


Application Information

IPC(8): G06N3/04, G06N3/08
CPC: G06N3/04, G06N3/08
Inventor: 王丹, 张扬
Owner BEIJING SOGOU TECHNOLOGY DEVELOPMENT CO LTD