
A Hardware Acceleration System Oriented to LSTM Network Model

A network-model hardware acceleration technology, applied to biological neural network models, climate sustainability, neural architectures, etc. It can solve the problems of scarce prior research results, mediocre optimization effect at the computing level, and high resource occupation, so as to achieve the effects of improved on-chip memory access efficiency and reduced memory access time.

Active Publication Date: 2022-05-20
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Although this method achieves system-level acceleration of the LSTM model, it mainly optimizes the storage-resource-consumption bottleneck, and its optimization effect at the computing level is only average.
[0006] In summary, although research on accelerating LSTM models on hardware platforms is of great significance, results in this area remain relatively scarce, and an LSTM network acceleration design with good versatility and an excellent acceleration effect is urgently needed.




Embodiment Construction

[0072] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict with one another.

[0073] Figure 1 shows the LSTM-network-oriented hardware acceleration architecture disclosed in an embodiment of the present invention, in which the "off-chip memory" is an external off-chip storage device and the "on-chip processing unit" is the main part of the architecture of this application, mainly including:

[0074] Network reasoning computing core: as the computing accelerator of the LSTM network model, it deploys computing units according to the network model to accelerate convolution operations, matrix point multiplication, matrix addition, activation functions, and other operations.
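To make the division of labor concrete, below is a minimal software reference model of one LSTM time step, decomposed into the same compute-unit types the patent names (matrix multiplication, matrix point multiplication, matrix addition, activation functions). This is an illustrative sketch only: the function and variable names are assumptions, and it does not represent the patent's actual hardware design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step, with the gates stacked as [i | f | g | o]."""
    H = h_prev.shape[0]
    # Matrix-multiplication and matrix-addition units: the two dense
    # products dominate the computation and are the main acceleration target.
    z = W @ x_t + U @ h_prev + b
    i = sigmoid(z[0 * H:1 * H])   # activation unit: input gate
    f = sigmoid(z[1 * H:2 * H])   # activation unit: forget gate
    g = np.tanh(z[2 * H:3 * H])   # activation unit: candidate state
    o = sigmoid(z[3 * H:4 * H])   # activation unit: output gate
    # Matrix point-multiplication and matrix-addition units: element-wise
    # state update of the cell and hidden states.
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Tiny smoke test with arbitrary sizes.
rng = np.random.default_rng(0)
D, H = 8, 16
h, c = lstm_step(rng.standard_normal(D), np.zeros(H), np.zeros(H),
                 rng.standard_normal((4 * H, D)),
                 rng.standard_normal((4 * H, H)), np.zeros(4 * H))
```

Because the four gates share the same matrix-vector structure, a hardware computing core can batch them into one wide multiplication, which is one common reason such designs improve computational parallelism.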



Abstract

The invention discloses a hardware acceleration system oriented to a deep learning long short-term memory (LSTM) network model, belonging to the technical field of deep learning hardware acceleration. The system includes a network reasoning computing core and a network data storage core. As the computing accelerator of the LSTM network model, the network reasoning computing core deploys computing units according to the network model to accelerate convolution operations, matrix point multiplication, matrix addition, activation functions, and other operations. The network data storage core serves as the data cache and interaction controller of the LSTM network model, deploying on-chip cache units according to the network model to realize the data interaction link between the computing core and the off-chip memory. The invention improves the computational parallelism of the LSTM network model, reduces processing delay, reduces memory access time, and improves memory access efficiency.
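The abstract does not detail how the storage core overlaps off-chip transfers with computation. As one hedged illustration, the sketch below shows double ("ping-pong") buffering, a common technique for hiding off-chip memory latency behind computation and one plausible reading of "reduces memory access time"; the function names (fetch_tile, compute_tile) are hypothetical and not from the patent.

```python
import threading

def process_tiles(num_tiles, fetch_tile, compute_tile):
    """Overlap fetching tile k+1 with computing tile k using two buffers."""
    if num_tiles == 0:
        return
    buffers = [None, None]
    buffers[0] = fetch_tile(0)  # prime the first buffer before the loop

    def prefetch_into(slot, k):
        buffers[slot] = fetch_tile(k)

    for k in range(num_tiles):
        worker = None
        if k + 1 < num_tiles:
            # Stream tile k+1 into the idle buffer while tile k is computed.
            worker = threading.Thread(target=prefetch_into,
                                      args=((k + 1) % 2, k + 1))
            worker.start()
        compute_tile(buffers[k % 2])  # compute on the buffer already on chip
        if worker is not None:
            worker.join()  # ensure the prefetched tile has landed

# Example: three tiles, trivial fetch and compute stand-ins.
data = [[k] * 4 for k in range(3)]
process_tiles(3, fetch_tile=lambda k: data[k],
              compute_tile=lambda t: print(sum(t)))
```

With two buffers, the compute unit always reads a tile that is already resident on chip while the next tile streams in, so transfer latency is paid once up front rather than before every tile.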

Description

Technical field

[0001] The invention belongs to the field of deep learning hardware acceleration, and more specifically relates to a hardware acceleration system oriented to an LSTM network model.

Background technique

[0002] Long Short-Term Memory (LSTM), a variant of the deep learning Recurrent Neural Network (RNN), is widely used in sequence-processing tasks such as speech recognition, natural language processing, and image compression. By introducing a gating mechanism and a state value that stores long- and short-term historical information, LSTM effectively solves the gradient explosion and gradient vanishing problems of RNN training, but at the cost of considerably higher computational and space complexity. Its intensive computation and memory access limit its application on resource-constrained embedded hardware platforms. Therefore, designing and optimizing the acceleration of the LSTM model for hardware platforms is a very meaningful line of research.
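For reference, the gating mechanism and state value described in [0002] correspond to the standard LSTM cell equations, stated here for clarity (the excerpt gives them only in prose):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state: long-term memory)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state: short-term output)}
\end{aligned}
```

The additive update of the cell state $c_t$ is what mitigates vanishing and exploding gradients, since error signals can flow backward through $c_t$ without repeated matrix multiplication, while the eight weight matrices and four activations per time step account for the increased computational and space complexity the passage mentions.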


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/04, G06N3/06, G06N3/08
CPC: G06N3/08, G06N3/06, G06N3/044, G06N3/045, Y02D10/00
Inventor: 钟胜, 王煜, 颜露新, 邹旭, 陈立群, 徐文辉, 张思宇, 颜章
Owner: HUAZHONG UNIV OF SCI & TECH