
Quantitative calculation method and system for convolutional neural network

A quantitative calculation method and system for a convolutional neural network, applied in the field of neural network algorithm hardware implementation. It addresses problems such as low calculation accuracy, large array power consumption, and insufficient computing power, with the effects of improving speed, reducing calculation power consumption, and increasing throughput.

Active Publication Date: 2020-04-10
HEFEI HENGSHUO SEMICON CO LTD

AI Technical Summary

Problems solved by technology

In existing NOR-Flash-based compute-in-memory arrays, some designs have complex control circuits, resulting in high power consumption and insufficient computing power. In other designs, the Flash cell threshold voltage is divided into only two (high and low) values; the corresponding control circuits are simple, making it easy to build a large-scale Flash computing array with high computing power and low power consumption, but the calculation precision is low and the resulting accuracy rate is poor.



Examples


Embodiment 1

[0085] As shown in Figure 7, this embodiment takes the AlexNet network as an example to further illustrate the convolutional neural network quantization calculation method of the present invention:

[0086] The AlexNet network has 8 layers in total: the first 5 are convolutional layers and the last 3 are fully connected layers. The convolutional neural network quantization calculation method accelerates the network in three steps:

[0087] The first step is to apply high-precision quantization to the first convolutional layer (Conv1), the second convolutional layer (Conv2), the penultimate fully connected layer (Fc7), and the last fully connected layer (Fc8), and to binarize the third convolutional layer (Conv3), the fourth convolutional layer (Conv4), the fifth convolutional layer (Conv5), and the sixth-layer fully connected layer (Fc6). Then check whether the accuracy rate after the software simulation runs meets the requirements, for example, whether the accuracy o...
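The layer split described in this step can be sketched as a simple lookup. The layer names come from the text above; the `partition_alexnet` helper and the array labels are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the AlexNet layer-partition step.
# Layers quantized with high precision (multi-bit) go to the
# high-precision array; binarized layers go to the high-compute-power array.

HIGH_PRECISION = {"Conv1", "Conv2", "Fc7", "Fc8"}   # multi-bit quantization
BINARY = {"Conv3", "Conv4", "Conv5", "Fc6"}          # binarized quantization

def partition_alexnet(layers):
    """Assign each layer of the network to a target compute array."""
    plan = {}
    for name in layers:
        if name in HIGH_PRECISION:
            plan[name] = "high-precision array"
        elif name in BINARY:
            plan[name] = "high-compute-power array"
        else:
            plan[name] = "non-compute layer"
    return plan

alexnet = ["Conv1", "Conv2", "Conv3", "Conv4", "Conv5", "Fc6", "Fc7", "Fc8"]
plan = partition_alexnet(alexnet)
```

In a real flow, the accuracy check described in the text would iterate this partition (moving layers between the two sets) until the simulated accuracy meets the target.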

Embodiment 2

[0094] As shown in Figure 8, this embodiment takes the LeNet network as an example to further explain the quantization calculation method of the convolutional neural network of the present invention. The LeNet network is simple, with only 7 layers, comprising convolutional layers (Conv), pooling layers (pool), and fully connected layers (Fc). The quantization calculation method accelerates the network in three steps:

[0095] The first step is to divide the computation-heavy layers (the convolutional and fully connected layers) between binary quantization and high-precision quantization. In general, the first convolutional layer (Conv1) and the last fully connected layer (Fc2) are quantized with high precision, while the second convolutional layer (Conv2) and the first fully connected layer (Fc1) are binarized. After the software simulation runs, whether the accuracy rate me...
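As a rough illustration of the binary-quantization half of this split, the following sketch binarizes a weight tensor to ±1 with a per-tensor scale. The mean-of-|w| scaling rule is a common choice in binarized networks; the patent excerpt does not specify its exact binarization rule, so treat this as an assumption.

```python
import numpy as np

def binarize(weights):
    """Binarize a weight tensor to {-1, +1} with a per-tensor scale factor.

    The scale alpha = mean(|w|) is one common convention; the patent's
    actual rule is not given in this excerpt.
    """
    alpha = np.abs(weights).mean()
    wb = np.where(weights >= 0, 1, -1).astype(np.int8)
    return alpha, wb

w = np.array([[0.4, -0.2], [-0.6, 0.8]])
alpha, wb = binarize(w)
# the binarized layer then computes roughly alpha * (x @ wb)
```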

Embodiment 3

[0102] As shown in Figure 9, this embodiment takes the DeepID1 network as an example to further illustrate the quantization calculation method of the convolutional neural network of the present invention:

[0103] The DeepID1 neural network model, used to extract facial features in face recognition algorithms, is mainly composed of convolutional layers (Conv), pooling layers (pool), and fully connected layers (Fc). The quantization calculation method accelerates the network in three steps:

[0104] The first step is to divide the computation-heavy layers (the convolutional and fully connected layers) between binary quantization and high-precision quantization. In general, the first convolutional layer (Conv1) and the last fully connected layer (Fc) are quantized with high precision, while the second convolutional layer (Conv2), the third convolutional layer (Conv3), and the fourth layer of convolut...
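The high-precision half of the split can be illustrated with a uniform multi-bit quantizer: weights are mapped to a small set of integer levels, which in the hardware described by the patent would correspond to multi-level Flash threshold voltages. The 4-bit width and the symmetric max-scaling rule here are arbitrary illustrative assumptions.

```python
import numpy as np

def quantize_multibit(weights, bits=4):
    """Uniform symmetric quantization to signed integer levels.

    An illustrative stand-in for multi-bit (high-precision) quantization;
    the bit width and scaling rule are assumptions, not the patent's spec.
    """
    levels = 2 ** (bits - 1) - 1          # e.g. 7 levels per sign for 4 bits
    scale = np.abs(weights).max() / levels
    q = np.round(weights / scale).astype(int)
    return scale, q

w = np.array([0.7, -0.3, 0.1, -0.7])
scale, q = quantize_multibit(w, bits=4)
# q * scale approximately reconstructs the original weights
```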



Abstract

The invention relates to the field of neural network algorithm hardware implementation, and discloses a quantitative calculation method and system for a convolutional neural network. The quantitative calculation method comprises the steps of: matching each calculation layer of a convolutional neural network to either a multi-valued quantization mode or a multi-bit quantization mode according to its calculation precision and computing power requirements; mapping the calculation layers after multi-bit quantization to a high-precision array and carrying out high-precision calculation; mapping the calculation layers after multi-valued quantization to a high-computing-power array and performing high-computing-power calculation; and completing the calculation of the convolutional neural network from the high-precision and high-computing-power calculation results, in combination with the non-calculation layers. According to the invention, the inference speed of the convolutional neural network is increased and accuracy is ensured, while network power consumption is reduced as much as possible; the method has high practical value and wide application prospects.

Description

technical field

[0001] The invention relates to the technical field of neural network algorithm hardware implementation, and in particular to a convolutional neural network quantization calculation method and system.

Background technique

[0002] Convolutional neural networks have shown great advantages in image recognition, object detection, and many other machine learning applications. A convolutional neural network is mainly composed of a cascade of convolutional layers, pooling layers, and fully connected layers. It mainly involves the following operations: the convolution operation between pixel blocks and convolution kernels, the activation operation that introduces nonlinearity, the downsampling operation (i.e., pooling) that reduces the feature map, and the fully connected operation. Most of the computation lies in the convolutional and fully connected layers.

[0003] Large-scale convolutional neural networks have huge parameter se...
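The four operations listed in [0002] can be sketched in a few lines. This is a generic textbook illustration (plain cross-correlation, ReLU activation, 2×2 max pooling), not the patent's hardware mapping; the input and kernel values are arbitrary.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D convolution (cross-correlation, as is standard in CNNs)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def relu(x):
    """Nonlinear activation."""
    return np.maximum(x, 0)

def maxpool2(x):
    """2x2 max pooling (downsampling) on an even-sized feature map."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)   # a 4x4 "pixel block"
k = np.ones((3, 3)) / 9.0                      # a 3x3 averaging kernel
y = maxpool2(relu(conv2d(np.pad(x, 1), k)))    # pad keeps 4x4, pool gives 2x2
```

A fully connected layer would then flatten `y` and multiply by a weight matrix; as [0002] notes, the convolution and fully connected matrix multiplies dominate the computation.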


Application Information

IPC(8): G06N3/04; G06N3/063
CPC: G06N3/063; G06N3/045
Inventor: 李政达, 任军, 郦晨侠, 吕向东, 盛荣华, 徐伟明, 徐瑞
Owner: HEFEI HENGSHUO SEMICON CO LTD