
A sparse convolutional neural network accelerator and an implementation method

A sparse convolutional neural network accelerator and implementation method, applied in the field of sparse convolutional neural network acceleration. It addresses problems such as the heavy consumption of hardware resources and achieves a good compression effect, simplified complexity, and simple encoding and decoding.

Active Publication Date: 2019-04-16
XI AN JIAOTONG UNIV


Problems solved by technology

Most existing neural network research focuses on improving recognition accuracy. Even lightweight network models ignore the computational complexity of hardware acceleration, and floating-point data representation leads to excessive consumption of hardware resources.




Embodiment Construction

[0045] The technical solutions of the present application will be clearly and completely described below in conjunction with the accompanying drawings.

[0046] In the present invention, connection weights are also referred to simply as weights.

[0047] Referring to Figure 4, the neural network accelerator of the present invention includes: an off-chip DRAM, a neuron input buffer, a neuron decoding unit, a neuron encoding unit, a neuron output buffer, a weight input buffer, a weight decoding unit, a neuron on-chip global buffer, a weight on-chip global buffer, a PE computing-unit array, an activation unit, and a pooling unit.
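The excerpt names these modules but not their interfaces, so the following is only a toy, runnable Python sketch of how data might flow through them. All behavior and signatures here are placeholder assumptions of mine; only the module names and their ordering come from the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class Accelerator:
    dram: dict = field(default_factory=dict)  # off-chip DRAM: address -> data

    def run_layer(self, neuron_addr, weight_addr, out_addr):
        # Input buffers stage compressed data read from off-chip DRAM.
        neuron_in_buf = self.dram[neuron_addr]
        weight_in_buf = self.dram[weight_addr]
        # Decoding units expand the streams into the on-chip global buffers.
        neuron_gbuf = self.decode(neuron_in_buf)
        weight_gbuf = self.decode(weight_in_buf)
        # PE computing-unit array: elementwise products (toy stand-in).
        products = [n * w for n, w in zip(neuron_gbuf, weight_gbuf)]
        # Activation (ReLU) and pooling units post-process the results.
        activated = [max(0, p) for p in products]
        pooled = [max(activated)]
        # The encoding unit recompresses outputs on the way back to DRAM.
        self.dram[out_addr] = self.encode(pooled)

    # Identity placeholders; a plausible codec is sketched further below.
    def decode(self, data):
        return data

    def encode(self, data):
        return data

acc = Accelerator(dram={"n0": [1, 0, 2], "w0": [3, 5, -1]})
acc.run_layer("n0", "w0", "y0")
print(acc.dram["y0"])  # [3]
```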

[0048] The off-chip DRAM is used to store the compressed neuron and connection weight data.

[0049] The neuron input buffer is used to cache the compressed neuron data read from the off-chip DRAM and transmit it to the neuron decoding unit.

[0050] The neuron decoding unit is used to decode the compressed neuron data, and transmit the decoded neuron to the neuron on-chip global buffer.
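Paragraphs [0049]–[0050] state that neuron data arrive compressed and are expanded by the decoding unit, but this excerpt does not specify the compression format (the summary only promises "simple encoding and decoding"). A common hardware-friendly choice for sparse streams is zero-run-length coding; the minimal sketch below is written under that assumption and is not the patent's actual codec.

```python
def encode_sparse(values):
    """Compress a stream as (zeros_before_value, value) pairs."""
    pairs, run = [], 0
    for v in values:
        if v == 0:
            run += 1          # count a run of zeros
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((run, None))  # trailing zeros, marked with a sentinel
    return pairs

def decode_sparse(pairs):
    """Expand (zero_run, value) pairs back into the dense stream."""
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        if v is not None:
            out.append(v)
    return out

stream = [0, 0, 5, 0, 0, 0, 7, 0]
assert decode_sparse(encode_sparse(stream)) == stream
```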

[0051] The weight i...



Abstract

The invention relates to a sparse convolutional neural network accelerator and an implementation method. The method comprises the steps of: reading the connection weights of a sparse network from an off-chip DRAM into a weight input buffer, decoding them with a weight decoding unit, and storing them in a weight on-chip global buffer; reading neurons into a neuron input buffer, decoding them with a neuron decoding unit, and storing the decoded neurons in a neuron on-chip global buffer; determining the calculation mode of the PE computing-unit array according to the configuration parameters of the current network layer, and sending the decoded and arranged neurons and connection weights to the PE computing units; and calculating the products of the neurons and the connection weights. In the accelerator, the multipliers in the PE units are all replaced by shifters, and all basic modules can be configured according to the network computation and available hardware resources, so the accelerator offers high speed, low power consumption, small resource occupation, and high data utilization.
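The abstract states that every PE multiplier is replaced by a shifter. That substitution works when each weight is quantized to a signed power of two, so a multiplication reduces to a bit shift plus a sign. The power-of-two quantization step below is my assumption; only the shift-for-multiply substitution itself is what the patent states.

```python
import math

def quantize_pow2(w):
    """Approximate w as sign * 2**exp; returns (sign, exp)."""
    if w == 0:
        return 0, 0
    return (1 if w > 0 else -1), round(math.log2(abs(w)))

def shift_mul(neuron, weight):
    """neuron * weight using only a shift (integer neuron assumed)."""
    sign, exp = quantize_pow2(weight)
    shifted = neuron << exp if exp >= 0 else neuron >> -exp
    return sign * shifted

print(shift_mul(3, 2))    # 6   (3 << 1)
print(shift_mul(3, -4))   # -12 (-(3 << 2))
print(shift_mul(3, 0.5))  # 1   (3 >> 1, truncating)
```

A shifter is far cheaper than a multiplier in area and power, which is consistent with the abstract's claims of low power consumption and small resource occupation.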

Description

Technical Field

[0001] The invention belongs to the technical field of deep neural network acceleration and relates to a sparse convolutional neural network accelerator and an implementation method.

Background Technique

[0002] The superior performance of deep neural networks (DNNs) comes from their ability to use statistical learning to obtain an effective representation of the input space from large amounts of data and to extract high-level features from raw data. However, these algorithms are computationally intensive. As the number of network layers keeps growing, DNNs place ever higher demands on storage and computing resources, making them difficult to deploy on embedded devices with limited hardware resources and tight power budgets. DNN model-compression technology helps reduce a network model's memory footprint, computational complexity, and system power consumption. On the one hand, it can improve the operating efficiency...


Application Information

IPC(8): G06N3/063, G06N3/04
CPC: G06N3/063, G06N3/045
Inventors: 刘龙军, 李宝婷, 孙宏滨, 梁家华, 任鹏举, 郑南宁
Owner: XI AN JIAOTONG UNIV