Method for improving model precision through low-bit mixed-precision quantization

A technology concerning model accuracy and precision, applied in the field of image processing, which addresses the problems that the structural accuracy of MobileNet models is difficult to guarantee and that full-precision accuracy is difficult to reach.

Pending Publication Date: 2022-07-01
合肥君正科技有限公司

AI Technical Summary

Problems solved by the technology

[0003] During low-bit training of a model, and in order to let the model run quickly on an embedded platform, the intermediate results of a convolution are generally stored at int16 bit width; but if the model is quantized to 4 bits, it is difficult to guarantee the accuracy of structures such as MobileNet.

Method used




Detailed Description of the Embodiments

[0053] In order to understand the technical content and advantages of the present invention more clearly, the present invention will now be further described in detail with reference to the accompanying drawings.

[0054] Through calculation and analysis of the model's channels, the method ensures that the model's int16 intermediate values do not cross the boundary, while maximizing the quantization bit width and precision of each layer. Note: the int16 value range of the model is -32768 to 32767; to speed up model inference on the inference side, values are kept within the int16 range, because if the range is exceeded the result of model inference will be abnormal.
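The overflow behavior that the note above warns about can be illustrated with a minimal Python sketch. The helper name and the two's-complement emulation are illustrative assumptions, not taken from the patent:

```python
def wrap_int16(x):
    """Emulate storing an accumulator in a 16-bit two's-complement
    register: values outside [-32768, 32767] wrap around."""
    return (x + 32768) % 65536 - 32768

# An in-range convolution accumulator is stored faithfully ...
print(wrap_int16(30000))          # 30000
# ... but one that exceeds 32767 wraps to a negative value, which is
# the kind of "abnormal inference result" the paragraph describes.
print(wrap_int16(30000 + 5000))   # -30536
```

This is why the method bounds each layer's worst-case accumulator before choosing its quantization bit width.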

[0055] As shown in Figure 2, the present invention relates to a low-bit mixed-precision quantization method for improving model accuracy; the method includes the following steps:

[0056] S1, a deep learning network framework structure is formed, including th...



Abstract

The invention provides a method for improving model precision through low-bit mixed-precision quantization. The method comprises the steps of: carrying out qualitative analysis of the model's network structure; guaranteeing, through calculation and analysis of the model's channels, that inference does not cross the int16 boundary; and improving the network model's precision through a mixed-precision configuration. By configuring the mixed-precision mode, the method ensures that the model's precision at low bit widths matches its 8-bit and full-precision accuracy. Mixed-precision quantization is carried out, and the network's features and weights are qualitatively analyzed, so that the model's low-bit quantization precision is higher.

Description

Technical field

[0001] The invention relates to the technical field of image processing, and in particular to a method for improving model precision through low-bit mixed-precision quantization.

Background technique

[0002] The invention addresses the problem that a deep neural network trained at a low bit width (4-bit) is difficult to converge and difficult to bring to the model's full-precision accuracy. By analyzing the model's channel counts, in combination with the model's quantization, the quantization bit width of each layer is determined so as to further improve the model's quantization accuracy.

[0003] During low-bit training of a model, and in order to let the model run quickly on an embedded platform, the intermediate results of a convolution are generally stored at int16 bit width; but if the model's weights and activations are quantized to 4 bits, it is difficult to guarantee the accuracy of structures such as MobileNet, and difficult to reach full-precision accuracy.
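The idea in paragraph [0002], choosing each layer's weight bit width from its channel count so that the int16 accumulator cannot overflow, can be sketched as follows. The layer shapes, the unsigned-activation/symmetric-weight model, and the helper names are hypothetical assumptions for illustration, not details from the patent:

```python
INT16_MAX = 32767

def accumulator_peak(in_channels, kernel_size, act_bits, weight_bits):
    """Worst-case magnitude of a convolution's running sum, assuming
    unsigned act_bits activations and symmetric signed weights."""
    n_mac = in_channels * kernel_size ** 2   # MACs per output value
    act_max = (1 << act_bits) - 1            # e.g. 15 for 4-bit activations
    w_max = 1 << (weight_bits - 1)           # e.g. 8 for 4-bit weights
    return n_mac * act_max * w_max

# Hypothetical MobileNet-like layers: (name, in_channels, kernel_size).
# Depthwise convolutions see only one input channel per group.
layers = [("conv1", 3, 3), ("dw_conv2", 1, 3),
          ("pw_conv2", 32, 1), ("pw_conv5", 512, 1)]

plan = {}
for name, cin, k in layers:
    bits = 8  # start from 8-bit weights, drop until the bound holds
    while bits > 1 and accumulator_peak(cin, k, 4, bits) > INT16_MAX:
        bits -= 1
    plan[name] = bits

print(plan)
```

In this sketch the narrow depthwise layer keeps 8-bit weights while the 512-channel pointwise layer is forced down to 3 bits: a per-layer mixed-precision configuration of the kind the abstract describes.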

Claims


Application Information

IPC(8): G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045
Inventor: 周飞飞
Owner: 合肥君正科技有限公司