
Neural network in-memory computing device and acceleration method based on communication lower bound

A neural network computing device technology, applied to biological neural network models, neural architectures, energy-efficient computing, etc. It addresses problems such as the inability to guarantee the optimality of data flow schemes and the lack of supporting theoretical analysis, and achieves the effect of reducing the amount of data access.

Active Publication Date: 2022-05-31
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

Many studies have proposed neural network acceleration devices and corresponding data flow schemes under the in-memory computing architecture, such as fixed-weight (weight-stationary), fixed-input-feature-map, and fixed-row/fixed-input-feature-map schemes. However, these works are based on intuitive observations; the optimality of their data flow schemes cannot be guaranteed, and supporting theoretical analysis is lacking.
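To make the trade-off concrete, the following is a minimal, hypothetical sketch (not the patent's scheme) of two such data flow orderings for a matrix-vector layer y = W·x, counting per-operand off-chip traffic: the fixed-weight ordering fetches each weight once but streams partial sums off chip, while an output-stationary ordering holds each output on chip until complete at the cost of re-fetching the input.

```python
import numpy as np

def weight_stationary(W, x):
    """Fixed-weight ordering: each weight is fetched exactly once,
    but partial sums of y travel off chip on every update."""
    M, N = W.shape
    y = np.zeros(M)
    traffic = {"W": 0, "x": 0, "y": 0}
    for j in range(N):                 # hold x[j] and column j on chip
        traffic["x"] += 1
        for i in range(M):
            traffic["W"] += 1          # each weight read exactly once
            traffic["y"] += 2          # read partial sum, write it back
            y[i] += W[i, j] * x[j]
    return y, traffic

def output_stationary(W, x):
    """Output-stationary ordering: each y[i] stays in an on-chip
    accumulator until complete, but x is re-fetched for every row."""
    M, N = W.shape
    y = np.zeros(M)
    traffic = {"W": 0, "x": 0, "y": 0}
    for i in range(M):
        acc = 0.0
        for j in range(N):
            traffic["W"] += 1
            traffic["x"] += 1          # x re-read for every output row
            acc += W[i, j] * x[j]
        traffic["y"] += 1              # each output written once
        y[i] = acc
    return y, traffic

W, x = np.random.rand(64, 128), np.random.rand(128)
print(weight_stationary(W, x)[1])   # heavy y traffic
print(output_stationary(W, x)[1])   # heavy x traffic
```

Neither ordering wins for all layer shapes and buffer sizes, which is precisely the gap a communication lower bound analysis is meant to close.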



Examples


Embodiment 1

[0045] Embodiment 1: a neural network in-memory computing device based on a communication lower bound. As shown in Figures 1-5, it includes a neural network acceleration device 100, a processor 200, and an external memory 300. The neural network acceleration device 100 is signal-connected to the processor 200 and to the external memory 300, respectively. The processor 200 controls the flow of the neural network acceleration device 100 and performs the computation of certain special layers (such as the Softmax layer); the external memory 300 stores the weight data, input feature map data, and output feature map data of each layer required during neural network computation.
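As a minimal illustration of this division of labor (the function and the host/accelerator split below are assumptions for exposition, not the patent's implementation), the processor 200 could evaluate a Softmax layer on the feature map produced by the acceleration device 100 as follows:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable Softmax: subtracting the row maximum
    before exponentiating avoids overflow."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)

# Hypothetical flow: the accelerator returns the final-layer logits,
# and the host processor applies the Softmax "special layer".
logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))  # ~[0.659, 0.242, 0.099]
```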

[0046] The processor 200 and the external memory 300 are signal-connected to each other; both belong to the prior art and are therefore not described in detail.

[0047] The neural network acceleration apparatus 100 includes an input/output port 102, a...



Abstract

The invention relates to the fields of neural network algorithms and computer hardware design, and specifically proposes a neural network in-memory computing device and acceleration method based on a communication lower bound. The disclosed device comprises a processor, an external memory, and a neural network acceleration device; an acceleration method performed with this device is also disclosed. Taking an off-chip/on-chip communication lower-bound analysis as theoretical support, the invention employs output feature map reuse and convolution window reuse, balances weight reuse against input feature map reuse, and proposes a neural network acceleration device and corresponding data flow scheme under an in-memory computing architecture, thereby reducing off-chip/on-chip data access.
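A minimal NumPy sketch of the two reuse patterns named above: output feature map reuse (accumulators stay "on chip" until each output tile is complete) and convolution window reuse (overlapping windows within a tile share one loaded input patch). The loop order and tile size here are illustrative assumptions; the patent chooses its actual tiling from the communication lower-bound analysis.

```python
import numpy as np

def conv2d_output_stationary(ifm, weights, tile=4):
    """Direct convolution with an output-stationary, tiled data flow."""
    C, H, W = ifm.shape
    K, _, R, S = weights.shape            # K filters with C x R x S kernels
    OH, OW = H - R + 1, W - S + 1
    ofm = np.zeros((K, OH, OW))
    for oy in range(0, OH, tile):
        for ox in range(0, OW, tile):
            ty, tx = min(tile, OH - oy), min(tile, OW - ox)
            # Convolution window reuse: one load of this patch serves
            # every overlapping window inside the output tile.
            patch = ifm[:, oy:oy + ty + R - 1, ox:ox + tx + S - 1]
            acc = np.zeros((K, ty, tx))   # output reuse: held until done
            for i in range(ty):
                for j in range(tx):
                    window = patch[:, i:i + R, j:j + S]
                    acc[:, i, j] = np.tensordot(weights, window, axes=3)
            ofm[:, oy:oy + ty, ox:ox + tx] = acc  # written back once
    return ofm

out = conv2d_output_stationary(np.random.rand(3, 8, 8),
                               np.random.rand(4, 3, 3, 3))
print(out.shape)  # (4, 6, 6)
```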

Description

Technical field
[0001] The invention relates to the fields of neural network algorithms and computer hardware design, and in particular provides a neural network in-memory computing device and acceleration method based on a communication lower bound.
Background technique
[0002] With the rapid development of neural network technology, a large number of neural network algorithms have emerged in image processing, medical diagnosis, autonomous driving, and other applications, showing great advantages. At the same time, to obtain better performance, the depth and scale of neural networks have gradually increased, and the number of weight parameters has grown accordingly, greatly increasing the amount of data movement in neural network computation. For neural network algorithms to be deployed in practical scenarios, bandwidth limitations, latency requirements, power consumption limits, and privacy protection...
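For context on the form such a bound takes (a classical result, not text from the patent): Hong and Kung's red-blue pebble game analysis shows that multiplying two n×n matrices with an on-chip memory of S words requires off-chip traffic of at least

```latex
Q \;=\; \Omega\!\left(\frac{n^{3}}{\sqrt{S}}\right)
```

regardless of the schedule. The invention takes an analogous off-chip/on-chip lower-bound analysis for neural network layers as the theoretical yardstick its data flow scheme is designed to approach.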


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/04; G06N3/063
CPC: G06N3/063; G06N3/047; G06N3/045; Y02D10/00
Inventors: 陈敏珍, 刘鹏, 王维东, 周迪
Owner: ZHEJIANG UNIV