
Method and device for deep neural network computing acceleration

A deep neural network precomputation technique

Active Publication Date: 2021-01-29
BEIJING BAIDU NETCOM SCI & TECH CO LTD
Cites 4 · Cited by 0

AI Technical Summary

Problems solved by technology

Matrix-vector multiplication in matrix operations is memory-bound, which limits the prediction speed of a deep neural network during computation.
Binary networks suffer a large loss of accuracy.
Pruning algorithms require a high degree of matrix sparsity, and their retraining process is complicated.
Therefore, none of the existing computation methods achieves good computational acceleration for neural networks.

Method used



Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0058] In the following, only some exemplary embodiments are briefly described. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature and not restrictive.

[0059] An embodiment of the present invention provides a method for accelerating computation of a deep neural network, as shown in Figure 1, comprising the following steps:

[0060] S100: Sample each input vector to be fed into the matrix model to obtain a plurality of sampling vectors.

[0061] S200: Perform product quantization on each sampling vector according to a preset quantization parameter to obtain multiple quantization points.

[0062] S300: Divide the matrix model into multiple matrix blocks according to the quantization parameter.

[0063] S400: Calculate each quantization point...
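Steps S100 through S400 can be sketched as a product-quantization precomputation pipeline. This is a minimal illustrative sketch, not the patent's implementation: the dimensions, the k-means codebook construction, and all variable names are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 16, 4      # input dimension; number of sub-spaces (the "quantization parameter")
k = 8             # quantization points (centroids) per sub-space
sub = d // m      # length of each sub-vector

# S100: sample input vectors destined for the matrix model
samples = rng.normal(size=(256, d))

# S200: product quantization -- fit a small k-means codebook per sub-space
def kmeans(X, k, iters=20):
    C = X[rng.choice(len(X), k, replace=False)]          # initial centroids
    for _ in range(iters):
        # assign each sample to its nearest centroid, then recompute means
        idx = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (idx == j).any():
                C[j] = X[idx == j].mean(0)
    return C

codebooks = [kmeans(samples[:, i*sub:(i+1)*sub], k) for i in range(m)]

# S300: divide the matrix model into blocks matching the sub-spaces
W = rng.normal(size=(10, d))                             # the "matrix model"
blocks = [W[:, i*sub:(i+1)*sub] for i in range(m)]

# S400: precompute each quantization point against each matrix block,
# yielding one (10, k) look-up table per sub-space
tables = [B @ C.T for B, C in zip(blocks, codebooks)]
```

Each table row holds the dot product of a matrix block with one quantization point, so the expensive multiplications are paid once, up front.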



Abstract

Embodiments of the present invention propose a method, device, terminal, and computer-readable storage medium for accelerating computation of a deep neural network. The method includes: sampling each input vector to be fed into the matrix model to obtain multiple sampling vectors; performing product quantization on each sampling vector according to preset quantization parameters to obtain multiple quantization points; dividing the matrix model into multiple matrix blocks according to the quantization parameters; computing each quantization point against each matrix block to obtain multiple precomputation tables; and computing each input vector via the precomputation tables to obtain the result of the matrix model. In embodiments of the present invention, the precomputation tables for a given matrix model need to be built only once; every input vector to be computed through that matrix model can then be evaluated by table look-up, which effectively reduces the cost of multiplying input vectors against the matrix model while preserving the model's original computational result.
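The look-up phase described above can be illustrated as follows. This is a self-contained sketch under assumed dimensions and randomly generated codebooks; the function name `lookup_matvec` and all parameters are hypothetical, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 16, 4, 8        # input dim; sub-spaces; quantization points per sub-space
sub = d // m

W = rng.normal(size=(10, d))                              # the "matrix model"
codebooks = [rng.normal(size=(k, sub)) for _ in range(m)] # stand-in PQ codebooks
# precomputation tables: each matrix block times each quantization point
tables = [W[:, i*sub:(i+1)*sub] @ C.T for i, C in enumerate(codebooks)]

def lookup_matvec(x):
    """Approximate W @ x by table look-up instead of multiplication."""
    y = np.zeros(W.shape[0])
    for i, (C, T) in enumerate(zip(codebooks, tables)):
        xs = x[i*sub:(i+1)*sub]
        j = np.argmin(((C - xs) ** 2).sum(1))  # nearest quantization point
        y += T[:, j]                           # add the precomputed block result
    return y

x = rng.normal(size=d)
approx = lookup_matvec(x)                      # approximation of W @ x
```

When an input sub-vector coincides exactly with a quantization point, the look-up result for that block is exact; otherwise the error depends on how well the codebooks cover the input distribution.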

Description

Technical field

[0001] The present invention relates to the technical field of data processing, and in particular to a method, device, terminal, and computer-readable storage medium for accelerating deep neural network computation.

Background technique

[0002] Prior-art methods for speeding up deep neural networks include matrix operations, pruning algorithms, and binary networks. Matrix-vector multiplication in matrix operations is memory-bound, which limits the prediction speed of deep neural networks during computation. Binary networks suffer a large loss of accuracy. Pruning algorithms require a high degree of matrix sparsity, and their retraining process is complicated. Therefore, none of the existing computation methods achieves good computational acceleration for neural networks.

[0003] The above information disclosed in this Background section is only for enhancement of understanding of the b...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority Patents(China)
IPC(8): G06N3/063, G06F9/28
CPC: G06F9/28, G06N3/063
Inventor: 朱志凡, 冯仕堃, 陈徐屹, 朱丹翔, 曹宇慧, 何径舟
Owner BEIJING BAIDU NETCOM SCI & TECH CO LTD