
Design method of hardware accelerator based on LSTM recursive neural network algorithm on FPGA platform

A recurrent neural network and hardware accelerator technology, applied in the field of LSTM recurrent neural network hardware accelerator design, which addresses problems such as the limitations of GPUs and achieves high prediction accuracy, an accelerated prediction process, and low power consumption.

Inactive Publication Date: 2018-05-29
SUZHOU INST FOR ADVANCED STUDY USTC

AI Technical Summary

Problems solved by technology

However, the high energy consumption of the GPU imposes certain limitations on its application.



Examples


Embodiment

[0065] The field-programmable gate array platform in the embodiment of the present invention refers to a computing system that integrates both a general-purpose processor (General Purpose Processor, "GPP") and a field-programmable gate array (Field Programmable Gate Array, "FPGA") chip, where the data path between the FPGA and the GPP can adopt the PCI-E bus protocol, the AXI bus protocol, and so on. The data path in the drawings of the embodiments of the present invention is illustrated using the AXI bus protocol as an example, but the present invention is not limited thereto.

[0066] Figure 1 is a flowchart of a design method 100 of an FPGA-based LSTM recurrent neural network hardware accelerator according to an embodiment of the present invention. The method 100 includes:

[0067] S110, using TensorFlow to construct an LSTM neural network and train the parameters of the neural network;

[0068] S120, using compression means to compress the parameters of the LSTM network, addressing the problem of insufficient FPGA storage resources; …
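The patent does not spell out the compression scheme at this point, but a common combination for fitting LSTM weights into on-chip FPGA storage is magnitude pruning followed by fixed-point quantization. The sketch below is plain Python; the threshold and the 8-bit width are illustrative assumptions, not values taken from the patent:

```python
# Hypothetical sketch of the compression step (S120): magnitude pruning
# followed by linear 8-bit quantization. The patent only says "compression
# means"; the exact scheme here is an illustrative assumption.

def prune(weights, threshold=0.05):
    """Zero out weights whose magnitude falls below the threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize_int8(weights):
    """Map floats linearly onto signed 8-bit integers plus one scale factor."""
    max_abs = max((abs(w) for w in weights), default=0.0)
    if max_abs == 0.0:
        return [0] * len(weights), 1.0
    scale = max_abs / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the 8-bit codes."""
    return [v * scale for v in q]

weights = [0.9, -0.02, 0.5, 0.01, -0.73]
pruned = prune(weights)            # small weights removed
q, scale = quantize_int8(pruned)   # 8-bit codes + one float scale
restored = dequantize(q, scale)    # close to the pruned weights
```

Storing one signed byte per weight instead of a 32-bit float cuts the weight memory footprint roughly fourfold, which is the kind of saving that lets the parameters stay in FPGA block RAM.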



Abstract

The invention discloses a method for accelerating an LSTM neural network algorithm on an FPGA platform. The FPGA platform is a field-programmable gate array platform comprising a general-purpose processor, a field-programmable gate array, and a storage module. The method comprises the following steps: an LSTM neural network is constructed using TensorFlow, and the parameters of the neural network are trained; the parameters of the LSTM network are compressed by a compression means, solving the problem of insufficient FPGA storage resources; according to the prediction process of the compressed LSTM network, a calculation part suitable for running on the field-programmable gate array platform is determined; according to the determined calculation part, a software and hardware collaborative calculation mode is determined; according to the calculation logic resources and bandwidth of the FPGA, the number and type of IP core firmware are determined, and acceleration is carried out on the field-programmable gate array platform using the hardware operation units. A hardware processing unit for accelerating the LSTM neural network can thus be quickly designed according to the available hardware resources, and this processing unit offers higher performance and lower power consumption than the general-purpose processor.
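The step of sizing the IP core firmware against the FPGA's logic resources and bandwidth can be illustrated with a toy model: the number of parallel cores is bounded both by the DSP budget and by memory bandwidth, and the smaller bound wins. The `max_ip_cores` helper and all resource figures below are hypothetical, not taken from the patent:

```python
# Hypothetical resource model for choosing how many parallel matrix-vector
# IP cores fit on the FPGA. All numbers are illustrative assumptions.

def max_ip_cores(dsp_total, dsp_per_core, bw_mbps, mbps_per_core):
    """Return the core count allowed by both DSP and bandwidth budgets."""
    by_dsp = dsp_total // dsp_per_core   # limit from compute logic
    by_bw = bw_mbps // mbps_per_core     # limit from memory bandwidth
    return max(1, min(by_dsp, by_bw))

# e.g. 900 DSP slices, 64 DSPs per core, 12800 MB/s bus, 1600 MB/s per core
n = max_ip_cores(900, 64, 12800, 1600)  # bandwidth-bound in this example
```

In this example the DSP budget would allow 14 cores but the bus can only feed 8, so the design would instantiate 8 cores; the same reasoning, with real device numbers, drives the "number and type of IP core firmware" decision in the abstract.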

Description

Technical Field

[0001] The invention relates to the field of computer hardware acceleration, and in particular to a design method for an FPGA-based LSTM recurrent neural network hardware accelerator.

Background Technique

[0002] The LSTM (Long Short-Term Memory) neural network is a kind of recurrent neural network (RNN) widely used in sequence processing applications. By replacing the neurons of an ordinary RNN with LSTM components, it solves the long-term dependency problem encountered when training traditional RNNs. Since each LSTM component representing a neuron contains four gates, each gate must be connected to the input node, and the value received by each gate must undergo a series of operations to produce the output value of the LSTM component. Consequently, when the number of LSTM components in the hidden layer of the neural network grows, the computational workload inside the neural network and the power consumption become very large. …
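The four-gate structure described above is the standard LSTM formulation; assuming the patent follows it, one forward step of a single LSTM component can be sketched in plain Python (scalar weights for clarity, where a real layer would use matrix-vector products for every gate, which is exactly the workload the accelerator targets):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM component (scalar form).

    The four gates (input i, forget f, cell candidate g, output o) each
    combine the current input x with the previous hidden state h_prev,
    which is why per-component work grows quickly with hidden-layer size.
    W, U, b are dicts of per-gate input weights, recurrent weights, biases.
    """
    i = sigmoid(W['i'] * x + U['i'] * h_prev + b['i'])    # input gate
    f = sigmoid(W['f'] * x + U['f'] * h_prev + b['f'])    # forget gate
    g = math.tanh(W['g'] * x + U['g'] * h_prev + b['g'])  # candidate value
    o = sigmoid(W['o'] * x + U['o'] * h_prev + b['o'])    # output gate
    c = f * c_prev + i * g                                # new cell state
    h = o * math.tanh(c)                                  # new hidden state
    return h, c

# Toy parameters, purely for illustration
W = {k: 0.5 for k in 'ifgo'}
U = {k: 0.3 for k in 'ifgo'}
b = {k: 0.0 for k in 'ifgo'}
h, c = lstm_cell(1.0, 0.0, 0.0, W, U, b)
```

Each gate performs a multiply-accumulate over the full input and hidden vectors before its nonlinearity, so a hidden layer of N components costs roughly 4 such matrix-vector products per time step.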


Application Information

IPC(8): G06N3/04; G06N3/063; G06N3/08
CPC: G06N3/063; G06N3/08; G06N3/044; G06N3/045
Inventors: Li Xi, Zhou Xuehai, Wang Chao, Chen Xianglan
Owner: SUZHOU INST FOR ADVANCED STUDY USTC