
Neural network quantization compression method and system

A neural network quantization compression technology, applied in the field of neural network computing. It addresses the problem that retraining evens out symbol frequencies and thereby increases the average Huffman code length, and achieves the effects of increased input randomness, reduced complexity, and deadlock avoidance.

Pending Publication Date: 2022-07-01
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0016] (1) Compared with the original normal distribution, the frequencies with which individual values occur after retraining are more even, which may increase the average code length of Huffman coding. Therefore, the data must be pre-coded before entropy coding, improving the symbol-frequency distribution that the Huffman encoder receives as input.
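The effect described in [0016] can be made concrete with a small sketch (not from the patent; the symbol counts are illustrative): Huffman's average code length is short when the distribution is peaked, and degenerates toward fixed-length coding as the distribution flattens.

```python
# Sketch: average Huffman code length for a peaked vs. a flattened
# symbol distribution. All frequency values below are illustrative.
import heapq

def huffman_code_lengths(freqs):
    """Return {symbol: code_length} for a frequency map via Huffman's algorithm."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1          # each merge adds one bit to members' codes
        heapq.heappush(heap, (f1 + f2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return lengths

def avg_code_length(freqs):
    total = sum(freqs.values())
    lengths = huffman_code_lengths(freqs)
    return sum(f * lengths[s] for s, f in freqs.items()) / total

peaked  = {0: 80, 1: 10, 2: 6, 3: 4}    # near-normal quantized weights
uniform = {0: 25, 1: 25, 2: 25, 3: 25}  # flattened after retraining
print(avg_code_length(peaked))   # 1.3 bits/symbol
print(avg_code_length(uniform))  # 2.0 bits/symbol: no gain over fixed-length codes
```

With four symbols, the flattened distribution yields exactly 2 bits per symbol, i.e. no compression at all, which is why a pre-coding step is needed before entropy coding.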




Embodiment Construction

[0067] In order to make the above-mentioned features and effects of the present invention clearer and more comprehensible, specific embodiments are described in detail below with reference to the accompanying drawings.

[0068] To optimize the neural network compression method, this work analyzes the distribution characteristics of the data in neural network models after pruning and quantization, and proposes a lossless compression algorithm that combines entropy coding, run-length coding, and all-zero coding. Hardware deployment forms are fully explored, and the NNcodec neural network encoding and decoding simulator is designed and implemented. The optimization effect of the hybrid coding on the neural network compression method is demonstrated, and an easy-to-implement hardware design scheme is given.




Abstract

The invention provides a neural network quantization compression method comprising the following steps: obtaining quantized neural network data to be compressed, and performing run-length all-zero coding on the data to obtain run-length compressed data, where run-length all-zero coding run-length encodes only the zero characters in the neural network data; and performing normalized Huffman coding on the run-length compressed data and reforming the coding result to obtain a normalized Huffman code as the compression result. Exploiting the sparsity of quantized neural network data, run-length coding is improved into run-length all-zero coding, which compresses the neural network data losslessly and more efficiently. The Huffman tree is reformed from top to bottom, so that storage of the complete Huffman tree structure is omitted and the complexity of table look-up operations is significantly reduced.
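The "normalized Huffman coding" in the abstract, with codes reformed from top to bottom so that no complete tree needs storing, matches the standard canonical Huffman scheme: only per-symbol code lengths are stored, and codewords are reassigned sequentially from shortest to longest. A sketch under that assumption (the patent's exact reformation rule may differ):

```python
# Sketch of canonical ("normalized") Huffman codes: codewords are derived
# purely from code lengths, so the decoder needs only {symbol: length},
# not the original Huffman tree.

def canonical_codes(lengths):
    """Map {symbol: code_length} -> {symbol: bitstring} in canonical order."""
    code = 0
    prev_len = 0
    codes = {}
    # Shorter codes first; ties broken by symbol value.
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (ln - prev_len)      # extend the running code to this length
        codes[sym] = format(code, f"0{ln}b")
        code += 1
        prev_len = ln
    return codes

lengths = {0: 1, 1: 2, 2: 3, 3: 3}    # lengths from an ordinary Huffman pass
print(canonical_codes(lengths))       # {0: '0', 1: '10', 2: '110', 3: '111'}
```

Because canonical codewords at each length are consecutive integers, a decoder can locate a symbol with a few comparisons per length instead of walking a stored tree, which is the table look-up simplification the abstract claims.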

Description

Technical Field

[0001] The invention relates to the field of neural network operations, and in particular to a method and system for quantizing and compressing neural networks based on hybrid coding.

Background Technique

[0002] In recent years, artificial intelligence has developed rapidly against the backdrop of exploding data volumes and hardware computing power, and has increasingly become the main driving force of productivity development and technological innovation. As the main branch of artificial intelligence technology, neural network algorithms have, in the pursuit of higher model accuracy, encountered technical bottlenecks such as complex structures, huge parameter counts, and heavy computation, which limits the application of neural network models where throughput and energy efficiency are paramount. Therefore, computational efficiency has become the main research goal of the next stage. The most effective neural...

Claims


Application Information

IPC(8): H04N19/122; H04N19/124; H04N19/13; H04N19/42; G06N3/08
CPC: H04N19/42; H04N19/124; H04N19/13; H04N19/122; G06N3/082; Y02D10/00
Inventor: 何皓源, 王秉睿, 支天, 郭崎
Owner INST OF COMPUTING TECH CHINESE ACAD OF SCI