
A neural network processor, design method, and chip based on data compression

A data-compression-based neural network technology, applied to biological neural network models, physical implementation, etc., which addresses the problem of accelerating computation and achieves the effect of improving computing speed and operating energy efficiency

Active Publication Date: 2019-07-30
INST OF COMPUTING TECH CHINESE ACAD OF SCI
3 Cites, 0 Cited by

AI Technical Summary

Problems solved by technology

The literature "Albericio J, Judd P, Hetherington T, et al. Cnvlutin: ineffectual-neuron-free deep neural network computing. In: Computer Architecture (ISCA), 2016 ACM/IEEE 43rd Annual International Symposium on. IEEE, 2016: 1-13." achieves large-scale parallel computing by providing large on-chip storage units and, on that basis, realizes compression of data elements; however, because this method relies on large-scale on-chip storage to meet its parallelism requirements, it is not suitable for embedded devices. The literature "Chen Y H, Emer J, Sze V. Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks. 2016." improves energy efficiency by sharing data and weights for data reuse and by using power gating to shut off computation on zero-valued data; however, this method can only reduce the power consumption of the operation and cannot skip zero-valued elements to speed up the calculation.
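The distinction drawn above can be illustrated with a small sketch (purely illustrative, not the circuit of either cited work or of this patent): gating the multiplier on a zero operand saves the energy of that multiply but still spends the cycle, whereas skipping zero operands before they reach the datapath reduces the number of operations actually performed.

```python
# Illustrative sketch: cycle counts for a dense multiply-accumulate loop
# versus one that skips zero-valued activations entirely.
def dense_mac_cycles(activations, weights):
    """Every element costs a cycle; zero-gating saves multiply energy only."""
    cycles = 0
    acc = 0
    for a, w in zip(activations, weights):
        cycles += 1          # the cycle is spent even when a == 0
        if a != 0:           # power gating: the multiplier idles for zeros
            acc += a * w
    return acc, cycles

def zero_skip_mac_cycles(activations, weights):
    """Zeros are compressed out in advance, so they cost no cycles at all."""
    nonzero = [(a, w) for a, w in zip(activations, weights) if a != 0]
    acc = 0
    for a, w in nonzero:
        acc += a * w
    return acc, len(nonzero)

acts = [0, 3, 0, 0, 5, 0, 2, 0]
wts  = [1, 2, 3, 4, 5, 6, 7, 8]
print(dense_mac_cycles(acts, wts))      # (45, 8): same sum, 8 cycles
print(zero_skip_mac_cycles(acts, wts))  # (45, 3): same sum, 3 cycles
```

Both loops produce the same accumulated result; only the second one converts the sparsity into fewer cycles, which is the acceleration opportunity the patent targets.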

Method used




Embodiment Construction

[0036] In researching neural network processors, the inventor found that neural network computations contain a large number of data elements with a value of 0. Under data operations such as multiplication and addition, these elements have no numerical effect on the calculation results; yet when the neural network processor handles them, they occupy a large amount of on-chip storage space, consume redundant transmission resources, and increase the running time, making it difficult to meet the performance requirements of the neural network processor.
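The storage-and-bandwidth cost of zeros described above motivates a zero-eliding storage format. The sketch below uses simple (index, value) pairs as one plausible such format; it illustrates the idea only and is not the patent's actual compression storage format.

```python
# Hypothetical zero-eliding storage format: keep only nonzero values together
# with their positions, so zeros consume no storage or transmission bandwidth.
def compress(data):
    """Return (index, value) pairs for nonzero elements, plus original length."""
    return [(i, v) for i, v in enumerate(data) if v != 0], len(data)

def decompress(pairs, length):
    """Rebuild the dense vector; positions not listed are implicitly zero."""
    out = [0] * length
    for i, v in pairs:
        out[i] = v
    return out

data = [0, 0, 7, 0, 0, 0, 4, 0, 0, 1]
pairs, n = compress(data)
assert decompress(pairs, n) == data     # lossless round trip
print(len(pairs), "of", n, "elements stored")  # 3 of 10 elements stored
```

For typical post-activation sparsity in convolutional networks, storing only the nonzero fraction shrinks both the on-chip footprint and the data that must be moved, which is exactly the waste paragraph [0036] identifies.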

[0037] After analyzing the computation structure of existing neural network processors, the inventor found that the data elements of a neural network can be compressed to accelerate operation and reduce energy consumption. The prior art provides the basic architecture of a neural network accelerator. The present invention proposes a data c...


PUM

No PUM

Abstract

The present invention proposes a neural network processor, design method, and chip based on data compression. The processor includes: at least one storage unit for storing operation instructions and the data participating in calculations; at least one storage unit controller for controlling the storage unit; at least one computing unit for performing the computing operations of the neural network; a control unit, connected to the storage unit controller and the computing unit, for obtaining the instructions stored in the storage unit via the storage unit controller and parsing them to control the computing unit; and at least one data compression unit for compressing the data participating in the calculation according to a data compression storage format, wherein each data compression unit is connected to its associated computing units. The invention reduces the occupation of data resources in the neural network processor, increases computing speed, and improves energy efficiency.
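The division of labor among the units named in the abstract can be sketched schematically. The class names and the (index, value) pair format below are hypothetical placeholders chosen for illustration; the patent does not disclose this exact interface.

```python
# Schematic (hypothetical) dataflow for the units named in the abstract:
# the data compression unit compresses the operands, and the computing unit
# then operates only on the surviving nonzero data.
class CompressionUnit:
    def compress(self, data):
        # keep (index, value) pairs for nonzero elements only
        return [(i, v) for i, v in enumerate(data) if v != 0]

class ComputingUnit:
    def mac(self, compressed_acts, weights):
        # weights are addressed by the indices that survived compression
        return sum(v * weights[i] for i, v in compressed_acts)

acts = [0, 2, 0, 3]
weights = [5, 6, 7, 8]
result = ComputingUnit().mac(CompressionUnit().compress(acts), weights)
print(result)  # 2*6 + 3*8 = 36
```

The point of the pairing is architectural: because each compression unit feeds its connected computing units directly, zeros never reach the datapath, which is how the claimed speed and energy-efficiency gains arise.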

Description

Technical field

[0001] The invention relates to the field of hardware acceleration for neural network model calculation, and in particular to a neural network processor, design method, and chip based on data compression.

Background technique

[0002] Deep learning technology has developed rapidly in recent years. Deep neural networks, especially convolutional neural networks, have achieved wide application in image recognition, speech recognition, natural language understanding, weather prediction, gene expression, content recommendation, and intelligent robots.

[0003] The deep network structure obtained by deep learning is an operational model containing a large number of data nodes; each data node is connected to other data nodes, and the connection relationship between nodes is represented by a weight. As the complexity of neural networks continues to increase, neural network technology encounters many problems in practical application, su...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Patent (China)
IPC(8): G06N3/06
CPC: G06N3/06, Y02D10/00
Inventor: 韩银和, 许浩博, 王颖
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI