
Neural network quantization compression method and system for balancing compression speed between streams

A neural network compression technology in the field of neural network computing. It addresses the problem that retraining can increase the average code length of Huffman coding, and achieves the effects of increasing input randomness, balancing compression speed between streams, and improving compression efficiency.

Active Publication Date: 2022-07-01
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0016] (1) Compared with the original normal distribution, the frequency of occurrence of each value after retraining is more even, which may increase the average code length of Huffman coding. Therefore, the data must be pre-coded before entropy coding, so as to sharpen the frequency distribution of the input characters and thereby improve the Huffman coding.
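The effect described above can be made concrete with a small experiment: Huffman coding approaches 1 bit per symbol for a sharply peaked distribution, but degenerates to fixed-length codes when frequencies are even. The sketch below builds Huffman code lengths for a skewed and a uniform four-symbol alphabet (the frequency values are illustrative, not from the patent):

```python
import heapq

def huffman_code_lengths(freqs):
    """Build a Huffman tree and return each symbol's code length.
    freqs: dict mapping symbol -> count."""
    # Heap entries: (total_count, unique_tiebreak, {symbol: depth_so_far})
    heap = [(c, i, {s: 0}) for i, (s, c) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    if len(heap) == 1:  # degenerate case: a lone symbol gets a 1-bit code
        return {s: 1 for s in freqs}
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, counter, merged))
        counter += 1
    return heap[0][2]

def avg_code_length(freqs):
    """Expected bits per symbol under the Huffman code for freqs."""
    lengths = huffman_code_lengths(freqs)
    total = sum(freqs.values())
    return sum(freqs[s] * lengths[s] for s in freqs) / total

skewed  = {0: 90, 1: 4, 2: 3, 3: 3}     # peaked, like pruned weights near zero
uniform = {0: 25, 1: 25, 2: 25, 3: 25}  # flattened, as after retraining
print(avg_code_length(skewed))   # 1.16 bits/symbol
print(avg_code_length(uniform))  # 2.0 bits/symbol -- no gain over fixed-length
```

This is why the method pre-codes the data (e.g. run-length coding of zeros) before entropy coding: pre-coding restores a skewed symbol distribution on which Huffman coding is effective.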




Embodiment Construction

[0067] In order to make the above-mentioned features and effects of the present invention clearer and easier to understand, specific embodiments are given below and described in detail in conjunction with the accompanying drawings.

[0068] To optimize the neural network compression method, this work analyzes the distribution characteristics of the data in a neural network model after pruning and quantization, and proposes a lossless compression algorithm that combines entropy coding, run-length coding, and all-zero coding. Forms of hardware deployment are fully explored, and the NNcodec neural network encoding/decoding simulator is designed and implemented. The optimization effect of the hybrid coding on the neural network compression method is demonstrated, and an easy-to-implement hardware design scheme is also given.
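The run-length component of the hybrid coding applies only to zero characters, which dominate a pruned and quantized model. A minimal sketch of this idea (the token format and the `max_run` cap are illustrative assumptions, not the patent's exact bitstream layout):

```python
def rle_zeros(data, max_run=15):
    """Run-length encode only zero symbols: each maximal run of zeros
    becomes a ('Z', run_length) token; nonzero symbols pass through.
    max_run caps runs so a length fits a fixed-width field (assumption)."""
    out, i = [], 0
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0 and j - i < max_run:
                j += 1
            out.append(('Z', j - i))  # one token replaces j - i zeros
            i = j
        else:
            out.append(data[i])
            i += 1
    return out

def rle_zeros_decode(tokens):
    """Inverse of rle_zeros: expand ('Z', n) tokens back into n zeros."""
    out = []
    for t in tokens:
        if isinstance(t, tuple):
            out.extend([0] * t[1])
        else:
            out.append(t)
    return out

w = [0, 0, 0, 5, 0, 0, 7, 7, 0]
enc = rle_zeros(w)
print(enc)  # [('Z', 3), 5, ('Z', 2), 7, 7, ('Z', 1)]
assert rle_zeros_decode(enc) == w
```

The resulting token stream is far more skewed than the raw data, so the subsequent Huffman stage regains its coding efficiency.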




Abstract

The invention provides a neural network quantization compression method that balances compression speed between streams, comprising the following steps. Step 1: acquire the quantized neural network data to be compressed and partition it into a plurality of data blocks. Step 2: assign a data stream to each data block to be compressed; compressing each data stream comprises performing run-length all-zero coding on the data block to obtain run-length compressed data, where run-length all-zero coding applies run-length coding only to the zero characters in the neural network data; performing normalized Huffman coding on the run-length compressed data; and reforming the coding result to obtain a standard Huffman code, which serves as the compression result of the data block. Step 2 further comprises sub-step 21: monitor the volume of compressed and coded data in each data stream, and write the virtual code corresponding to a virtual character into the output cache of the data stream that is currently coding fastest. In this way the compression speed between the streams is balanced, the coding gap between the pipelines is reduced, and deadlock is avoided.
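Sub-step 21 can be read as follows: when parallel streams must feed downstream pipeline stages in lockstep, a stream that finishes its real data early keeps emitting a reserved virtual codeword so no lane stalls. A minimal sketch under that reading (the `VIRTUAL` codeword, buffer representation, and `balance_streams` helper are all hypothetical illustrations, not the patent's actual format):

```python
# Hypothetical reserved codeword for the virtual character; a decoder
# would recognize and discard it.
VIRTUAL = '1111111'

def balance_streams(stream_buffers):
    """Pad each stream's output buffer with virtual codewords until every
    buffer holds the same number of codes, so pipeline stages consuming
    one code per cycle from every stream never stall or deadlock.
    stream_buffers: list of lists of codeword strings (mutated in place)."""
    target = max(len(buf) for buf in stream_buffers)
    for buf in stream_buffers:
        # A stream that coded faster (fewer real codes left) receives
        # virtual codes to stay aligned with the slowest stream.
        buf.extend([VIRTUAL] * (target - len(buf)))
    return stream_buffers

bufs = [['01', '10', '110'], ['01'], ['10', '0']]
balance_streams(bufs)
assert all(len(b) == 3 for b in bufs)
```

The cost of the padding is a few wasted bits per virtual code, traded for the guarantee that no inter-stream coding gap can back up the pipeline.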

Description

Technical Field
[0001] The invention relates to the field of neural network computing, and in particular to a neural network quantization compression method and system for balancing the compression speed between streams.
Background
[0002] In recent years, artificial intelligence has developed rapidly against the background of explosions in both information volume and hardware computing power, and has increasingly become a main driving force for productivity development and technological innovation. As a main branch of artificial intelligence technology, the neural network algorithm, in the pursuit of ever higher model accuracy, has run into technical bottlenecks such as complex structures, huge parameter counts, and large amounts of computation, which limit the application of neural network models where throughput and energy-efficiency ratio matter. Therefore, computational efficiency becomes the main research goal in the next stag...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N19/122; H04N19/124; H04N19/13; H04N19/42; G06N3/08
CPC: H04N19/42; H04N19/124; H04N19/122; H04N19/13; G06N3/082; Y02D10/00
Inventor: 何皓源, 王秉睿, 支天, 郭崎
Owner INST OF COMPUTING TECH CHINESE ACAD OF SCI