
Step-by-step training method for a nonlinear quantization deep neural network

A technology of deep neural networks and nonlinear quantization, applied in the field of step-by-step training for nonlinear quantization deep neural networks. It can solve problems such as the local optimal solution not being the global optimal solution, loss of accuracy in the quantized network, and sensitivity to outliers, achieving the effects of avoiding network performance loss, reducing quantization errors, and reducing the impact of outliers.

Pending Publication Date: 2020-11-13
BEIJING INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

However, most current quantization methods adopt linear quantization, which is very sensitive to outliers in the data and easily causes large quantization errors. In addition, these methods quantize the feature maps and the weight parameters at the same time during training, so training is prone to falling into a local optimal solution rather than the global optimal solution.
These two problems cause a certain loss of precision in the quantized network, especially for low-bit-width quantized networks.
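To make the outlier problem concrete, the following minimal sketch (an illustration only; the two quantizers, the function names, and the 4-bit setting are assumptions, not the quantization functions defined by this invention) compares a uniform (linear) quantizer with a simple logarithmic one on weights that contain a single outlier. The outlier stretches the uniform grid over the full value range, so most of the small weights land on coarse levels and the quantization error grows, while the nonlinear quantizer spreads its levels more evenly across magnitudes.

```python
# Illustration only: why uniform (linear) quantization is sensitive to outliers.
# These quantizers are generic textbook examples, not the patent's Q_A or Q_W.
import numpy as np

def linear_quantize(x, k):
    """Uniform k-bit quantization over the full range [min(x), max(x)]."""
    levels = 2 ** k - 1
    lo, hi = x.min(), x.max()
    step = (hi - lo) / levels
    return np.round((x - lo) / step) * step + lo

def log_quantize(x, k):
    """Simple nonlinear (logarithmic) k-bit quantization of magnitudes."""
    levels = 2 ** k - 1
    sign = np.sign(x)
    mag = np.log1p(np.abs(x))
    lo, hi = mag.min(), mag.max()
    step = (hi - lo) / levels
    q = np.round((mag - lo) / step) * step + lo
    return sign * np.expm1(q)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=10000)   # typical small weights
w[0] = 3.0                              # a single outlier stretches the range

for name, fn in [("linear", linear_quantize), ("log", log_quantize)]:
    err = np.mean((w - fn(w, k=4)) ** 2)
    print(f"{name:>6} 4-bit MSE: {err:.2e}")
```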




Detailed Description of the Embodiments

[0015] The present invention will be described in detail below with reference to the accompanying drawings and examples.

[0016] The invention provides a step-by-step training method for a nonlinear quantization deep neural network; the specific implementation process is as follows:

[0017] The training process of a quantized deep convolutional neural network can be regarded as an optimization problem, which can be expressed by the following formula:

[0018] $\min_{w}\; \mathcal{L}\big(Q_A(x;k),\, Q_W(w;k)\big)$

[0019] where $\mathcal{L}$ is the network loss function, $x$ is the input image, $w$ is the weight parameter, $Q_A$ is the feature-map quantization function, $Q_W$ is the weight-parameter quantization function, and $k$ is the quantization bit width. The present invention decomposes the above training process into three steps; the specific implementation process is as follows:
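As a rough Python sketch of this objective (assuming, for illustration, uniform k-bit quantizers with a straight-through estimator for both Q_A and Q_W; the class and function names are hypothetical and not taken from the patent), the forward pass below evaluates the loss on quantized feature maps and quantized weights while the optimizer updates the full-precision weights w:

```python
# Minimal sketch of min_w L(Q_A(x; k), Q_W(w; k)) under assumed uniform
# quantizers: both feature maps and weights are quantized to k bits in the
# forward pass; gradients flow through a straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize(t, k):
    """Uniform k-bit quantization with a straight-through estimator."""
    levels = 2 ** k - 1
    lo, hi = t.min().detach(), t.max().detach()
    scale = (hi - lo).clamp(min=1e-8) / levels
    q = torch.round((t - lo) / scale) * scale + lo
    return t + (q - t).detach()   # forward: q, backward: identity

class QuantConv(nn.Module):
    def __init__(self, c_in, c_out, k=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(c_out, c_in, 3, 3) * 0.1)
        self.k = k

    def forward(self, x):
        x_q = quantize(x, self.k)              # Q_A: quantize feature map
        w_q = quantize(self.weight, self.k)    # Q_W: quantize weights
        return F.conv2d(x_q, w_q, padding=1)

# One optimization step on dummy data.
layer = QuantConv(3, 8, k=4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x = torch.randn(2, 3, 32, 32)
target = torch.randn(2, 8, 32, 32)
loss = F.mse_loss(layer(x), target)
loss.backward()
opt.step()
print("loss:", loss.item())
```

Quantizing everything at once like this corresponds to the single-shot formulation above, which, as noted earlier, tends to fall into local optima; the invention instead decomposes the training into separate steps.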

[0020] Step 1: Training the weight-parameter nonlinear transformation network

[0021] First, train a weight parameter nonlinear transformation network. ...
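The rest of this paragraph is truncated in the source, so the following is only a hedged sketch of what a weight-parameter nonlinear transformation could look like: a learnable power-law ("companding") transform g(w) = sign(w)·|w|^α whose single parameter α is trained so that uniform quantization of the transformed weights, followed by the inverse transform, reconstructs the original weights with low error. The transform, the reconstruction objective, and the names used are assumptions, not details confirmed by the patent.

```python
# Hedged sketch only: Step 1 is truncated in the excerpt, so this learnable
# power-law transform and its reconstruction objective are illustrative
# assumptions, not the patent's actual nonlinear transformation network.
import torch
import torch.nn as nn

def uniform_quantize(t, k=4):
    """k-bit uniform quantizer with a straight-through estimator."""
    levels = 2 ** k - 1
    lo, hi = t.min().detach(), t.max().detach()
    scale = (hi - lo).clamp(min=1e-8) / levels
    q = torch.round((t - lo) / scale) * scale + lo
    return t + (q - t).detach()

class PowerTransform(nn.Module):
    """Learnable nonlinear transform applied to weights before quantization."""
    def __init__(self):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(()))  # alpha = exp(log_alpha) > 0

    def forward(self, w):
        alpha = self.log_alpha.exp()
        return torch.sign(w) * w.abs().clamp(min=1e-8) ** alpha

    def inverse(self, t):
        alpha = self.log_alpha.exp()
        return torch.sign(t) * t.abs().clamp(min=1e-8) ** (1.0 / alpha)

# Illustrative Step-1-style training: learn alpha so that quantizing the
# transformed weights and mapping back reconstructs the weights accurately.
w = torch.cat([torch.randn(4096) * 0.05, torch.tensor([2.0, -1.5])])  # weights + outliers
g = PowerTransform()
opt = torch.optim.Adam(g.parameters(), lr=1e-2)
for step in range(300):
    opt.zero_grad()
    w_hat = g.inverse(uniform_quantize(g(w), k=4))
    loss = ((w_hat - w) ** 2).mean()
    loss.backward()
    opt.step()
print(f"learned alpha = {g.log_alpha.exp().item():.3f}, "
      f"reconstruction MSE = {loss.item():.2e}")
```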



Abstract

The invention discloses a step-by-step training method for a nonlinear quantization deep neural network. The method divides the training process of a quantized deep convolutional neural network into a plurality of steps, achieves gradual quantization of the weights and feature maps, and reduces the difficulty of training the quantized network. Meanwhile, nonlinear quantization is carried out on the network parameters during training, which reduces quantization errors and improves the network's computing performance.

Description

Technical Field

[0001] The invention belongs to the field of compression and acceleration of deep convolutional neural networks, and in particular relates to a step-by-step training method for nonlinear quantization deep neural networks.

Background Technique

[0002] In recent years, deep convolutional neural networks have made breakthroughs in intelligent image-processing tasks such as target detection and classification recognition, and have been widely used in fields such as autonomous driving, mobile phone image processing, and remote sensing image processing. Deep convolutional neural networks involve a large number of weight parameters and calculations, so high-performance devices such as CPUs and GPUs are often used as implementation platforms for algorithm deployment. However, the high power consumption of these devices makes them difficult to apply in scenarios where computing resources and power consumption are strictly lim...


Application Information

IPC (8): G06N 3/04; G06N 3/08
CPC: G06N 3/084; G06N 3/08; G06N 3/045
Inventor: 陈禾, 魏鑫, 刘文超, 龙腾, 陈亮
Owner: BEIJING INSTITUTE OF TECHNOLOGY