Neural network hardware accelerator

A hardware accelerator and neural network technology, applied in the field of artificial intelligence, which can solve problems such as a poor ratio of computing power to power consumption, the lack of a power-efficiency advantage in existing neural network hardware accelerators, and insufficient optimization of RNN neural network computations.

Pending Publication Date: 2020-11-10
SHENZHEN DAPU MICROELECTRONICS CO LTD

AI Technical Summary

Problems solved by technology

Although these neural network hardware accelerators can also be used for RNN (Recurrent Neural Network) networks, they include few computation optimizations for RNNs, and their computational efficiency on RNN workloads is poor.
In other words, when the same existing hardware is used to compute a CNN, little of its resources are wasted, but when it is used to compute an RNN, a large share of its resources sits idle.
More wasted resources mean higher power consumption per unit of computing power delivered, so such accelerator architectures have no advantage in power efficiency or cost when running RNNs.

Embodiment Construction

[0044] The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.

[0045] Currently, neural network hardware accelerator architectures are biased toward CNN optimization. Although these accelerators can also be used for RNN neural networks, they include little computation optimization for RNNs, and their computational efficiency on RNN workloads is poor, so a large share of hardware resources is wasted. More wasted resources mean higher power consumption per unit of computing power delivered, so such accelerator architectures have no advantage in power-consumption efficiency or cost when running RNNs.

Abstract

The invention discloses a neural network hardware accelerator. The pipeline architecture of the hardware accelerator comprises: an instruction fetch module for acquiring instructions; an instruction decode module for performing instruction decoding; a half-precision floating-point operation module for carrying out one-dimensional vector operations; an activation function calculation module for evaluating activation functions by table lookup; a floating-point post-processing unit for performing floating-point operations on the data produced by the activation function; and a caching module for caching intermediate data generated while the neural network algorithm runs. Register files distributed along the pipeline, at the same stage as the instruction decode module, temporarily store the instructions, data, and addresses used during execution. This design greatly improves the accelerator's hardware resource utilization when running RNN neural network algorithms, and thereby improves its power efficiency per unit of computation per unit of time.
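To make the dataflow concrete, here is a minimal behavioral sketch of such a pipeline in Python. All names, the instruction format, and the lookup-table parameters (table size, input clamp range, sigmoid as the activation) are assumptions for illustration; the patent text does not disclose these details.

```python
import numpy as np

# Behavioral sketch of the pipeline stages named in the abstract.
# LUT size, clamp range, and sigmoid are assumed, not from the patent.
LUT_SIZE, LUT_RANGE = 1024, 8.0
_xs = np.linspace(-LUT_RANGE, LUT_RANGE, LUT_SIZE)
SIGMOID_LUT = (1.0 / (1.0 + np.exp(-_xs))).astype(np.float16)

def fetch(program, pc):
    """Instruction fetch module: read the next instruction."""
    return program[pc]

def decode(instr):
    """Instruction decode module; the register files sit at this stage
    and hold the operands, data, and addresses an instruction uses."""
    return instr["op"], instr["srcs"], instr["dst"]

def fp16_vector_op(op, a, b):
    """Half-precision module: one-dimensional (elementwise) vector ops."""
    a, b = a.astype(np.float16), b.astype(np.float16)
    return a * b if op == "vmul" else a + b  # "vadd"

def lut_activation(x):
    """Activation module: sigmoid by table lookup instead of exp()."""
    idx = np.clip(((x.astype(np.float32) + LUT_RANGE) * (LUT_SIZE - 1)
                   / (2 * LUT_RANGE)).astype(int), 0, LUT_SIZE - 1)
    return SIGMOID_LUT[idx]

def fp_postprocess(x, scale=np.float16(1.0)):
    """Floating-point post-processing unit: e.g. rescale the result."""
    return x * scale

# Caching module: intermediate data carried between RNN time steps.
cache = {}
regs = {"v0": np.ones(8, np.float16), "v1": np.full(8, 0.5, np.float16)}
program = [{"op": "vmul", "srcs": ("v0", "v1"), "dst": "v2"}]
op, srcs, dst = decode(fetch(program, 0))
cache[dst] = fp_postprocess(lut_activation(fp16_vector_op(op, regs[srcs[0]], regs[srcs[1]])))
```

The table lookup replaces a transcendental-function unit with an index computation plus a memory read, which is the usual motivation for LUT-based activation evaluation in accelerators.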

Description

Technical Field
[0001] The present application relates to the technical field of artificial intelligence, and in particular to a neural network hardware accelerator.
Background Technique
[0002] At present, hardware accelerators for neural networks include Google's TPU, NVIDIA's NVDLA, Cambricon's chips, and so on. Mainstream neural network hardware accelerators perform extensive computation optimization for CNNs (Convolutional Neural Networks), and in the hardware computation process they apply targeted optimizations to convolution operations for convolution kernels of different sizes.
[0003] It can be seen that current neural network hardware accelerator architectures as a whole are biased toward CNN optimization, and convolution is indeed the part of neural network algorithms that demands the most computing power. Although these accelerators can also be used for RNN (Recurrent Neural Network) networks, they include few computation optimizations for RNNs, and their computational efficiency on RNN workloads is poor.
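To ground the contrast the background draws: the kernel of an RNN is a recurrence of matrix-vector products and elementwise activations, with no convolutions for a CNN-tuned array to accelerate. A minimal sketch, assuming a vanilla (Elman-style) RNN cell, since the patent does not name a specific cell type:

```python
import numpy as np

# One vanilla RNN time step: two matrix-vector products plus an
# elementwise activation. Nothing here is a convolution, so hardware
# specialized for convolution leaves much of its datapath idle.
def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

hidden, inp = 256, 128
rng = np.random.default_rng(0)
W_xh = rng.standard_normal((hidden, inp)).astype(np.float16)
W_hh = rng.standard_normal((hidden, hidden)).astype(np.float16)
b_h = np.zeros(hidden, np.float16)
h = np.zeros(hidden, np.float16)
for x_t in rng.standard_normal((10, inp)).astype(np.float16):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # each step depends on the last
```

The sequential dependence on the hidden state also limits the weight reuse and batching that convolution hardware exploits, which is one reason a CNN-tuned accelerator wastes resources on RNN workloads.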

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/063; G06N3/04; G06F9/30; G06F9/38
CPC: G06N3/063; G06F9/30141; G06F9/3013; G06F9/30145; G06F9/3867; G06N3/045; G06F9/38; G06N3/04; G06F9/30
Inventors: 李文江, 黄运新, 冯涛, 徐斌, 王岩, 李卫军
Owner: SHENZHEN DAPU MICROELECTRONICS CO LTD