
Deep neural network accelerator based on hybrid precision storage

A deep neural network accelerator technology, applied in the fields of calculation and counting. It addresses problems such as operating-power limits on accelerator deployment, underused neural network advantages, and unverified system stability, achieving compressed storage of data and weights at reduced precision.

Inactive Publication Date: 2020-02-07
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

Therefore, the design difficulties of deep neural network accelerators come down to two points: 1) deep neural networks keep growing in scale, and memory access has become the biggest bottleneck in neural network computation; in particular, when the weight matrix is larger than the cache capacity, the advantages of the neural network cannot be fully exploited; 2) the structure of a deep neural network dictates that its basic operations are large numbers of multiply-accumulate operations, and multiplication has always been an arithmetic operation that consumes substantial hardware resources, incurs long latency, and draws high power; the speed and power consumption of these operations determine the performance of a deep neural network accelerator.
[0004] Traditional deep neural network accelerators mainly improve system reliability and stability by instantiating large numbers of multiply-add computing units and storage units. The resulting chip area and operating power limit the deployment of such accelerators in portable interactive devices.
To solve these problems, the currently most popular technique is to binarize the weight data. This greatly simplifies network data scheduling and memory access patterns, but it incurs a large loss of network accuracy, and system stability remains to be verified.
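To illustrate why weight binarization simplifies the hardware, the sketch below (an illustrative example, not the patent's method) shows that with weights constrained to +1/-1, every multiply in a dot product collapses to a sign-conditional add or subtract:

```python
def binarize(weights):
    """Quantize real-valued weights to +1 / -1 (sign binarization)."""
    return [1 if w >= 0 else -1 for w in weights]

def binary_dot(activations, bin_weights):
    """Dot product with binarized weights: every multiply collapses to a
    sign-conditional add or subtract, so no hardware multiplier is needed."""
    acc = 0
    for a, w in zip(activations, bin_weights):
        acc += a if w > 0 else -a
    return acc

bw = binarize([0.7, -1.2, 0.05, -0.3])   # [1, -1, 1, -1]
print(binary_dot([2, 3, 4, 5], bw))      # 2 - 3 + 4 - 5 = -2
```

The accuracy loss the text mentions comes from discarding all magnitude information in `binarize`, which is what the patent's mixed-precision storage is designed to avoid.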


Embodiment Construction

[0015] The present invention is further illustrated below in conjunction with specific examples. It should be understood that these examples are intended only to illustrate the invention, not to limit its scope of protection. After reading this disclosure, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the appended claims of this application.

[0016] The overall architecture of the deep neural network accelerator based on mixed-precision storage in the present invention is shown in figure 1. In operation, the accelerator receives weights from offline training and compression and, under the control and scheduling of the control module, completes the decoding of weights of different precisions and the operations of the fully connected layer and the activation layer. The deep neural network accelerator based on mixed-precision storage includes 4 on-chip cache m...
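The flow above (decode weights of different precisions, then run the fully connected and activation layers) can be sketched as follows. The packing format here (sign in the MSB, magnitude in the low bits, with a configurable number of significant bits) is an assumption for illustration, not the patent's exact encoding:

```python
def decode_mixed_precision(word, sig_bits):
    """Unpack a weight stored as a sign bit plus `sig_bits` magnitude bits.
    This sign/magnitude layout is an illustrative assumption."""
    sign = -1 if (word >> sig_bits) & 1 else 1
    magnitude = word & ((1 << sig_bits) - 1)
    return sign * magnitude

def fc_layer(activations, packed_weights, sig_bits):
    """Fully connected layer: decode each weight, multiply-accumulate,
    then apply a ReLU-style nonlinearity (standing in for the
    accelerator's nonlinear calculation module)."""
    acc = 0
    for a, w in zip(activations, packed_weights):
        acc += a * decode_mixed_precision(w, sig_bits)
    return max(acc, 0)

# With 4 significant bits: 0b10011 -> -3, 0b00101 -> +5
print(fc_layer([1, 2], [0b10011, 0b00101], sig_bits=4))  # 1*(-3) + 2*5 = 7
```

Making `sig_bits` a parameter mirrors the bit-width-controllable multiply-add module: the same datapath serves weights stored at different precisions.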


Abstract

The invention discloses a deep neural network accelerator based on hybrid-precision storage, belonging to the technical field of calculation and counting. The accelerator comprises an on-chip cache module, a control module, a bit-width-controllable multiply-accumulate batch calculation module, a nonlinear calculation module, a register array, and a Huffman decoding module based on double lookup tables. The effective-bit and sign-bit parameters of each weight are stored in the same memory, realizing mixed-precision data storage and parsing as well as multiply-add operation between data and mixed-precision weights. Through mixed-precision data storage and parsing and Huffman decoding based on a dual lookup table, data and weights are compressed and stored at different precisions, data traffic is reduced, and low-power data scheduling for the deep neural network is realized.
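One plausible reading of a dual-lookup-table Huffman decoder is that short, frequent codewords resolve in one lookup into a small first table, while rarer long codewords fall through to a second table. The sketch below illustrates that split; the code tables and bit widths are invented for the example and are not taken from the patent:

```python
SHORT_BITS = 2
SHORT_TABLE = {          # 2-bit prefix -> (symbol, code length), or None
    0b00: ('A', 1),      # codeword '0' (both paddings of the 1-bit code)
    0b01: ('A', 1),
    0b10: ('B', 2),      # codeword '10'
    0b11: None,          # escape: resolve via the long table
}
LONG_TABLE = {           # full 3-bit codewords
    0b110: ('C', 3),
    0b111: ('D', 3),
}

def decode(bits):
    """Decode a '0'/'1' string with a fast short table plus a fallback
    long table; only the consumed code length advances the cursor."""
    out, i = [], 0
    while i < len(bits):
        chunk = bits[i:i + SHORT_BITS].ljust(SHORT_BITS, '0')
        entry = SHORT_TABLE[int(chunk, 2)]
        if entry is None:                      # long codeword
            chunk = bits[i:i + 3].ljust(3, '0')
            entry = LONG_TABLE[int(chunk, 2)]
        sym, length = entry
        out.append(sym)
        i += length
    return ''.join(out)

print(decode('0' '10' '110' '111'))  # -> 'ABCD'
```

Keeping the frequent codes in a table indexed by only a couple of bits keeps the common decode path to a single small memory access, which is consistent with the abstract's low-power scheduling goal.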

Description

Technical field

[0001] The invention discloses a deep neural network accelerator based on mixed-precision storage, relates to the design of a digital-analog hybrid integrated circuit for an artificial-intelligence neural network, and belongs to the technical field of calculation and counting.

Background technique

[0002] Deep neural networks have been widely researched and applied owing to their superior performance. Current mainstream deep neural networks have hundreds of millions of connections, and their memory-intensive and compute-intensive characteristics make them difficult to map onto embedded systems with extremely limited resources and power budgets. In addition, the trend toward more accurate and more powerful deep neural networks makes their scale and required storage space ever larger, and their computational overhead and complexity ever greater.

[0003] The traditional cust...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08
CPC: G06N3/08
Inventor: 刘波, 朱文涛, 沈泽昱, 黄乐朋, 李焱, 孙煜昊, 杨军
Owner: SOUTHEAST UNIV