neural network acceleration method based on cooperative processing of multiple FPGAs

A neural-network collaborative-processing technology, applied in the field of neural network optimization, that addresses problems such as reduced neural network processing performance and achieves an improved energy-efficiency ratio.

Active Publication Date: 2019-05-17
SHANDONG INSPUR SCI RES INST CO LTD

AI Technical Summary

Problems solved by technology

One layer of an existing neural network model cannot always be implemented fully in parallel on a single FPGA, so the processing performance of the neural network is reduced when serial processing is required; a pipelined, layer-by-layer implementation across multiple FPGAs can greatly improve the processing performance of the neural network.
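The steady-state benefit of pipelining described above can be illustrated with a back-of-the-envelope calculation. The per-layer latencies below are hypothetical, chosen only for illustration; they do not come from the patent:

```python
# Hypothetical per-layer latencies (ms) for a 4-layer model.
layer_ms = [3.0, 5.0, 2.0, 4.0]

# Serial on one FPGA: each new input must wait for the whole model,
# so throughput is limited by the total latency.
serial_throughput = 1000.0 / sum(layer_ms)  # inputs per second

# One layer per FPGA in a pipeline: once the pipeline is full, an input
# completes every time the slowest stage (the bottleneck) finishes.
pipeline_throughput = 1000.0 / max(layer_ms)  # inputs per second

print(round(serial_throughput, 1), round(pipeline_throughput, 1))
```

With these numbers the pipeline sustains 1000/5 = 200 inputs per second versus 1000/14 ≈ 71 serially, even though the latency of any single input is unchanged.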

Method used




Embodiment Construction

[0024] The present invention provides a neural network acceleration method based on cooperative processing by multiple FPGAs. A neural network acceleration board is established, on which an SOC chip and several FPGAs are arranged; the SOC chip comprises a ZYNQ chip, and the ZYNQ chip is interconnected with each FPGA.

[0025] According to the complexity, latency requirements, and throughput requirements of the neural network model, the ZYNQ chip decomposes the parameters of the network model layer by layer, divides the FPGAs into pipeline stages according to this layer-wise decomposition, issues the parameters of each layer to the FPGA of the corresponding pipeline stage, and controls which FPGA is activated at each pipeline stage according to the neural network model, until the FPGA at the last pipeline stage completes the data processing.
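The layer-wise decomposition and pipeline-stage division above can be viewed as a contiguous partition problem: assign consecutive layers to a fixed number of FPGA stages so that the slowest stage (the pipeline bottleneck) is as fast as possible. The following is a minimal sketch of that idea; the function name, the per-layer cost model, and the dynamic program are our own illustrative assumptions, not the patent's actual algorithm:

```python
def partition_layers(costs, n_fpgas):
    """Split consecutive layers into n_fpgas pipeline stages,
    minimizing the largest per-stage cost (the pipeline bottleneck).

    costs: per-layer processing cost (e.g. estimated cycles) -- assumed known.
    Returns (stages, bottleneck) where stages is a list of layer-index lists.
    """
    n = len(costs)
    prefix = [0]
    for c in costs:
        prefix.append(prefix[-1] + c)

    INF = float("inf")
    # dp[k][i]: best achievable bottleneck covering the first i layers
    # with k stages; cut[k][i] remembers where the last stage begins.
    dp = [[INF] * (n + 1) for _ in range(n_fpgas + 1)]
    cut = [[0] * (n + 1) for _ in range(n_fpgas + 1)]
    dp[0][0] = 0
    for k in range(1, n_fpgas + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):
                bottleneck = max(dp[k - 1][j], prefix[i] - prefix[j])
                if bottleneck < dp[k][i]:
                    dp[k][i] = bottleneck
                    cut[k][i] = j

    # Walk the cut table backwards to recover the stage boundaries.
    stages, i = [], n
    for k in range(n_fpgas, 0, -1):
        j = cut[k][i]
        stages.append(list(range(j, i)))
        i = j
    stages.reverse()
    return stages, dp[n_fpgas][n]
```

For example, `partition_layers([4, 2, 3, 1, 5], 2)` groups layers 0-1 on the first FPGA and layers 2-4 on the second, giving a bottleneck cost of 9 instead of the serial total of 15.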

[0026] The invention also provides a neural network accelerator based...



Abstract

The invention discloses a neural network acceleration method based on cooperative processing by multiple FPGAs, and relates to the field of neural network optimization. A neural network acceleration board is established, on which an SOC chip and several FPGAs are arranged; the SOC chip comprises a ZYNQ chip, and the ZYNQ chip is interconnected with each FPGA. According to the complexity, latency requirements, and throughput requirements of the neural network model, the ZYNQ chip decomposes the parameters of the network model layer by layer, divides the FPGAs into pipeline stages according to this layer-wise decomposition, issues the parameters to the FPGA of the corresponding pipeline stage, and controls which FPGA is started at each pipeline stage according to the neural network model, until the FPGA at the last pipeline stage completes the data processing.

Description

technical field

[0001] The invention discloses a neural network acceleration method based on cooperative processing by multiple FPGAs, and relates to the field of neural network optimization.

Background technique

[0002] A neural network (Neural Networks, NN) is a complex network system formed by a large number of simple processing units (called neurons) that are widely interconnected. It reflects many basic features of human brain function and is a highly complex nonlinear dynamical learning system. Neural networks have large-scale parallelism, distributed storage and processing, self-organization, self-adaptation, and self-learning capabilities, and are especially suitable for imprecise and fuzzy information-processing problems that must consider many factors and conditions simultaneously. One layer of an existing neural network model cannot always be implemented fully in parallel on a single FPGA, so the processing performance of the neural network...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/063; G06F15/78
CPC: Y02D10/00
Inventor: 秦刚, 姜凯, 于治楼
Owner SHANDONG INSPUR SCI RES INST CO LTD