FPGA acceleration device, method and system for realizing neural network

A neural network acceleration device technology, applied in the field of deep learning, which addresses problems such as the high energy consumption of GPUs and achieves the effect of low power consumption.

Status: Inactive · Publication date: 2019-03-19
SHENZHEN LINTSENSE TECH CO LTD
Cites 9 · Cited by 10
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0003] The deep network structure obtained by deep learning is an operation model that contains a large number of data nodes. Each data node is connected to other data nodes, and the connection between nodes is represented by a weight. Mainstream neural network processing hardware usually adopts a general-purpose processor (CPU) or a graphics processing unit (GPU)...




Embodiment Construction

[0014] The subject matter described herein will now be discussed with reference to example implementations. It should be understood that the discussion of these implementations is only to enable those skilled in the art to better understand and realize the subject matter described herein, and is not intended to limit the protection scope, applicability or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. For example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with respect to some examples may also be combined in other examples.

[0015] As used herein, the term "comprising" and its variants represent open terms meaning "including but not limited to". The...



Abstract

The invention discloses an FPGA acceleration device, method and system for realizing a neural network. The device includes: at least one storage unit for storing operation instructions, operation data, and the weight data of the n sub-networks constituting the neural network, where n is an integer greater than 1; a plurality of computing units for executing the vector multiply-add operations of the neural network computation according to the operation instructions, the operation data, the weight data and the execution order j of the n sub-networks, where the initial value of j is 1 and the final calculation result of the sub-network with execution order j serves as the input of the sub-network with execution order j + 1; and a control unit, connected with the at least one storage unit and the plurality of computing units, for obtaining the operation instructions through the at least one storage unit and parsing them to control the plurality of computing units. The FPGA is used to accelerate the operation process of the neural network and, compared with a general-purpose processor or a graphics processor, offers high performance and low power consumption.
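To make the execution-order mechanism described in the abstract concrete, the following is a minimal software sketch, not the patented FPGA hardware, of how a control unit could drive n sub-networks in sequence so that the final result of the sub-network with execution order j becomes the input of the sub-network with execution order j + 1, with each stage built from vector multiply-add operations. The names (SubNetwork, vector_mac, run_accelerator) and the ReLU activation are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def vector_mac(inputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Vector multiply-add: the core operation the computing units execute."""
    return weights @ inputs + bias

class SubNetwork:
    """One of the n sub-networks; holds its own weight data (illustrative)."""
    def __init__(self, weights: np.ndarray, bias: np.ndarray):
        self.weights = weights
        self.bias = bias

    def forward(self, x: np.ndarray) -> np.ndarray:
        # ReLU activation is an assumption for the demo, not specified by the patent.
        return np.maximum(vector_mac(x, self.weights, self.bias), 0.0)

def run_accelerator(subnets: list[SubNetwork], operation_data: np.ndarray) -> np.ndarray:
    """Control-unit role: execute the sub-networks in execution order j = 1..n,
    feeding the output of sub-network j into sub-network j + 1."""
    x = operation_data
    for j, net in enumerate(subnets, start=1):  # j starts at 1, as in the abstract
        x = net.forward(x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three chained sub-networks (n = 3); dimensions chosen arbitrarily for the demo.
    dims = [8, 16, 16, 4]
    nets = [SubNetwork(rng.standard_normal((dims[i + 1], dims[i])),
                       rng.standard_normal(dims[i + 1])) for i in range(3)]
    print(run_accelerator(nets, rng.standard_normal(dims[0])))
```

The sketch only mirrors the data flow (storage of weights per sub-network, sequential execution, multiply-add as the primitive); scheduling, parallelism across computing units and memory layout on the FPGA are outside its scope.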

Description

Technical field

[0001] This application relates to the field of deep learning technology, and in particular to an FPGA acceleration device, method and system for realizing a neural network.

Background technique

[0002] With the continuous development of machine learning technology, deep neural networks have become the best solution for cognition and recognition tasks and have attracted widespread attention in fields such as recognition, detection and computer vision; in the field of image recognition in particular, deep neural networks have reached or even surpassed human recognition accuracy.

[0003] The deep network structure obtained by deep learning is an operation model that contains a large number of data nodes. Each data node is connected to other data nodes, and the connection between nodes is represented by a weight. Mainstream neural network processing hardware usually adopts a general-purpose processor (CPU) or a graphics processing unit (GPU), among the...
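As a concrete illustration of the operation model described in [0003], the sketch below computes the value of a single data node from the nodes it is connected to, with each connection represented by a weight. This is a generic illustration of the weighted-connection model, not code from the patent; the function name node_value is an assumption.

```python
import numpy as np

def node_value(connected_inputs: np.ndarray, connection_weights: np.ndarray) -> float:
    """Value of one data node: a weighted sum over the nodes it is connected to,
    where each connection relationship is represented by a weight ([0003])."""
    return float(np.dot(connection_weights, connected_inputs))

# Example: a node connected to four other nodes.
print(node_value(np.array([0.2, -1.0, 0.5, 0.3]),
                 np.array([0.7, 0.1, -0.4, 0.9])))
```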

Claims


Application Information

IPC(8): G06N 3/063; G06N 3/04
CPC: G06N 3/049; G06N 3/063
Inventor: 金玲玲, 饶东升
Owner: SHENZHEN LINTSENSE TECH CO LTD