
Neural network in-memory computing device based on communication lower bound and acceleration method

A neural network computing-device technology, applied to biological neural network models, energy-saving computing, neural architectures, and related fields. It addresses problems such as the inability to guarantee the optimality of data flow schemes and the lack of supporting theoretical analysis, and achieves the effect of reducing the amount of data access.

Active Publication Date: 2021-06-29
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

Many studies have proposed neural network acceleration devices and corresponding data flow schemes under the in-memory computing architecture, such as weight-stationary, input-feature-map-stationary, and row-stationary combined with input-feature-map-stationary schemes. However, these works are based on intuitive observation: the optimality of the data flow scheme cannot be guaranteed, and theoretical analysis to support it is lacking.
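As a rough illustration of why the choice of data flow scheme matters, the sketch below counts off-chip element accesses for a fully connected layer y = Wx over a batch under two of the classic schemes named above. This is a simplified counting model under assumed buffer sizes, not the patent's analysis; all function names and parameters are illustrative.

import math

def weight_stationary_accesses(out_ch, in_ch, batch, buf_rows):
    # Weight tiles stay on chip: each weight is loaded once, but the
    # inputs must be re-streamed once per weight tile.
    n_tiles = math.ceil(out_ch / buf_rows)
    weight_loads = out_ch * in_ch
    input_loads = n_tiles * in_ch * batch
    output_stores = out_ch * batch
    return weight_loads + input_loads + output_stores

def input_stationary_accesses(out_ch, in_ch, batch, buf_cols):
    # Input tiles stay on chip: each input is loaded once, but the
    # weights must be re-streamed once per batch tile.
    n_tiles = math.ceil(batch / buf_cols)
    input_loads = in_ch * batch
    weight_loads = n_tiles * out_ch * in_ch
    output_stores = out_ch * batch
    return weight_loads + input_loads + output_stores

print(weight_stationary_accesses(1024, 1024, 64, buf_rows=128))  # 1638400
print(input_stationary_accesses(1024, 1024, 64, buf_cols=16))    # 4325376

Which scheme moves less data depends on the layer shape and buffer size; the patent's point is precisely that such intuition-driven choices come with no proof of optimality, which a communication lower bound analysis supplies.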



Examples


Embodiment 1

[0045] Embodiment 1: A neural network in-memory computing device based on a communication lower bound, as shown in Figures 1-5, includes a neural network acceleration device 100, a processor 200, and an external memory 300, with the neural network acceleration device 100 connected to the processor 200 and to the external memory 300, respectively. The processor 200 is used to control the workflow of the neural network acceleration device 100 and to carry out the computation of certain special layers (such as the Softmax layer). The external memory 300 stores the weight data and input feature map data required during neural network computation, as well as the output feature map data of each layer produced during layer-by-layer computation.

[0046] The processor 200 and the external memory 300 are signal-connected to each other; both belong to the prior art, so a detailed description of them is omitted.
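For concreteness, one such special layer left to the processor 200 is Softmax, whose exponentials and global normalization do not map onto the multiply-accumulate pattern that in-memory arrays accelerate. A minimal NumPy sketch of the operation (illustrative only, not the patent's processor implementation):

import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Subtract the row maximum first so exp() cannot overflow.
    shifted = x - np.max(x, axis=axis, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=axis, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
print(softmax(logits))  # approx. [[0.659 0.242 0.099]]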

[0047] The neural network...



Abstract

The invention relates to the field of neural network algorithms and computer hardware design, in particular to a neural network in-memory computing device based on a communication lower bound and an acceleration method. The invention discloses a neural network in-memory computing device based on a communication lower bound, which comprises a processor, an external memory, and a neural network acceleration device. The invention further discloses an acceleration method using this device. Taking an analysis of the lower bound on off-chip to on-chip communication as theoretical support, the method exploits output feature map reuse, convolution window reuse, and balanced weight and input feature map reuse, and provides a neural network acceleration device under an in-memory computing architecture together with a corresponding data flow scheme, thereby reducing the amount of off-chip to on-chip data access.
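Two of the reuse forms named above can be made concrete with a small sketch (hypothetical, not the patent's data flow): in the loop nest below, the scalar accumulator models output feature map reuse, since each partial sum stays local until its output pixel is complete and is written out exactly once, while the overlap of adjacent sliding windows means neighboring outputs reuse the same input elements (convolution window reuse).

import numpy as np

def conv2d_output_stationary(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    H, W_in = x.shape
    K = w.shape[0]
    out = np.zeros((H - K + 1, W_in - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            acc = 0.0  # partial sum kept local until complete (output reuse)
            for ki in range(K):
                for kj in range(K):
                    # x[i+ki, j+kj] is shared with neighboring windows
                    acc += x[i + ki, j + kj] * w[ki, kj]
            out[i, j] = acc  # single write-out per output element
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3))
print(conv2d_output_stationary(x, w))  # 3x3 map of window sums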

Description

Technical Field

[0001] The invention relates to the fields of neural network algorithms and computer hardware design, and specifically proposes a neural network in-memory computing device and acceleration method based on a communication lower bound.

Background Technique

[0002] With the rapid development of neural network technology, a large number of neural network algorithms have emerged that perform very well in applications such as image processing, medical diagnosis, and autonomous driving, showing huge advantages. At the same time, in pursuit of better performance, the number and scale of neural networks have gradually increased, and the number of weight parameters has grown with them, resulting in a significant increase in the amount of data movement in neural network computation. In order for neural network algorithms to be applicable in real scenarios, bandwidth constraints, latency requirements, power consumption constraint...
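For orientation, a communication lower bound of the kind the title refers to bounds from below the data volume Q that must cross the off-chip/on-chip boundary, for any execution order, given an on-chip memory of S words. The classic result of this type is Hong and Kung's red-blue pebble bound for multiplying two n-by-n matrices, quoted here as general background rather than as the patent's own derivation:

Q = \Omega\left( \frac{n^{3}}{\sqrt{S}} \right)

Bounds of this form are what allow a data flow scheme's off-chip data access to be certified as (near-)optimal, rather than merely observed to be good in experiments; the patent derives and applies such a bound for neural network layers.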

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/063
CPC: G06N3/063; G06N3/047; G06N3/045; Y02D10/00
Inventor: 陈敏珍, 刘鹏, 王维东, 周迪
Owner: ZHEJIANG UNIV