Balanced binarization neural network quantization method and system

A binary neural network technology, applied to neural learning methods, biological neural network models, neural architectures, etc. It addresses the problem that the storage occupation and computation consumption of the model are not well handled, and achieves the effects of improved classification performance, minimized quantization loss of weights and activations, and reduced overall quantization loss.

Pending Publication Date: 2019-11-19
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0005] However, the consumption caused by the storage occupation ...




Embodiment Construction

[0050] The technical content of the present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0051] Quantization-based neural network compression and acceleration methods represent the weights and activations in a network with very low precision. In the extreme case, quantizing weights and activations to one-bit values allows the neural network to implement traditional convolution operations efficiently through bitwise operations, enabling small storage and fast inference. Fully binarizing a convolutional neural network model minimizes both the storage occupation and the amount of computation of the model, greatly saving the storage space of the parameters, and at the same time converts the original floating-point operations on the parameters into bit operations, which greatly speeds up the inference process of the neural network and reduces the amo...
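
As a rough illustration of the bitwise computation described above (a sketch, not code from the patent): when weights and activations are both restricted to ±1 and packed into machine words, the dot product inside a convolution reduces to an XNOR followed by a population count.

    import numpy as np

    def binary_dot(x_bits: int, w_bits: int, n: int) -> int:
        """Dot product of two ±1 vectors packed as n-bit integers.
        Bit = 1 encodes +1, bit = 0 encodes -1; for such vectors
        sum(x_i * w_i) = 2 * popcount(XNOR(x, w)) - n."""
        xnor = ~(x_bits ^ w_bits) & ((1 << n) - 1)  # XNOR, masked to n bits
        return 2 * bin(xnor).count("1") - n

    # Check against the ordinary floating-point dot product.
    x = np.array([+1, -1, +1, +1], dtype=np.int8)
    w = np.array([-1, -1, +1, -1], dtype=np.int8)
    pack = lambda v: int("".join("1" if b > 0 else "0" for b in v), 2)
    assert binary_dot(pack(x), pack(w), len(x)) == int(x @ w)

Sliding this packed dot product over the input reproduces the convolution, which is why full binarization can replace floating-point multiply-accumulates with bit operations.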



Abstract

The invention discloses a balanced binarization neural network quantization method and system. The method comprises the following steps: S1, performing a balanced standardized binarization operation on a weight in a neural network to obtain a binarized weight; S2, performing a balanced binarization operation on an activation value in the neural network to obtain a binarized activation value; and S3, executing steps S1 and S2 on the convolutional layers of the network during the iterative training of the neural network to generate a balanced binary neural network. By using the balanced standardized binarization of the network weights and the balanced binarization of the network activation values, the neural network can, by minimizing the loss function during training, maximize the information entropy of the activation values and minimize the quantization loss of the weights and activations, so that the quantization loss is reduced and the classification performance of the binarized neural network is improved.
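
To make steps S1-S3 concrete, the following is a minimal sketch under assumptions not spelled out in the abstract: the weights are taken to be standardized to zero mean and unit deviation before the sign, and the activations to be shifted by their median so that roughly half of the binary values are +1, which maximizes the information entropy of the binary activations. The function names are illustrative only.

    import numpy as np

    def balanced_standardized_binarize_weights(w: np.ndarray) -> np.ndarray:
        """S1 (sketch): standardize weights (zero mean, unit deviation),
        then take the sign to obtain balanced binary weights."""
        w_std = (w - w.mean()) / (w.std() + 1e-8)
        return np.where(w_std >= 0, 1.0, -1.0)

    def balanced_binarize_activations(a: np.ndarray) -> np.ndarray:
        """S2 (sketch): shift activations by their median before the sign,
        so about half of the binary activations are +1 and half are -1."""
        return np.where(a - np.median(a) >= 0, 1.0, -1.0)

    # S3 (sketch): apply S1/S2 to each convolutional layer at every training step.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(16, 3, 3, 3))
    a = rng.normal(size=(8, 32, 32, 3))
    print(balanced_standardized_binarize_weights(w).mean())  # near 0, i.e. balanced
    print(balanced_binarize_activations(a).mean())           # near 0, i.e. balanced

In actual training, such binarization operators would be combined with a gradient approximation (e.g. a straight-through estimator) so that minimizing the loss function can still update the underlying full-precision weights.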

Description

Technical field

[0001] The invention relates to a method for quantizing a balanced binary neural network, and also to a system for quantizing a neural network that implements the method, belonging to the technical field of deep learning.

Background technique

[0002] Deep neural networks (DNNs), especially deep convolutional neural networks (CNNs), have proven themselves in various computer vision applications, such as image classification, object detection, and visual segmentation. Traditional CNNs usually have a large number of parameters and high computing requirements, and the training and inference process for a task takes a lot of time. The main reason for this is that the models currently achieving the best results on various tasks generally use convolutional neural networks of great depth and breadth, so that storing such a model requires a large amount of storage resources, and the training and inference process requires a huge ...

Claims


Application Information

IPC(8): G06N 3/04, G06N 3/08
CPC: G06N 3/084, G06N 3/045
Inventors: 刘祥龙 (Liu Xianglong), 沈明珠 (Shen Mingzhu), 秦浩桐 (Qin Haotong)
Owner: BEIHANG UNIV