
Design method for deploying and optimizing operator library on FPGA and DSP

A design method and operator technology, applied to software deployment, biological neural network models, neural architectures, etc. It addresses problems such as limited hardware storage resources, the difficulty of applying operators to edge hardware devices, and the high number of memory accesses incurred by fine-grained operators, and achieves the effect of reducing the time required for manual code porting.

Pending Publication Date: 2021-12-10
HANGZHOU INNOVATION RES INST OF BEIJING UNIV OF AERONAUTICS & ASTRONAUTICS +1


Problems solved by technology

[0008] To address the shortcomings of the above-mentioned prior art, the purpose of the present invention is to propose an optimized design method for an operator library deployed on FPGA and DSP. The method solves the problem that operators in existing deep learning frameworks are too high-level to be applied to edge hardware devices, the problem that a fusion operator library is difficult to parallelize efficiently, and the problems of excessive memory accesses by fine-grained operators and limited hardware storage resources.
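
To make the memory-access problem concrete, the following C sketch (illustrative only and not taken from the patent; all function names are hypothetical) contrasts three fine-grained elementwise operators, each of which reads and writes a full buffer in external memory, with a single fused operator that keeps intermediates in registers. This is the kind of saving a fusion operator library targets on storage-limited FPGA/DSP hardware.

    #include <stddef.h>

    /* Fine-grained operators: each pass traverses a full tensor, so a
     * three-operator chain costs six full-tensor memory traversals and
     * needs two scratch buffers in external memory. */
    void op_add(const float *a, const float *b, float *out, size_t n) {
        for (size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }
    void op_scale(const float *in, float s, float *out, size_t n) {
        for (size_t i = 0; i < n; ++i) out[i] = in[i] * s;
    }
    void op_relu(const float *in, float *out, size_t n) {
        for (size_t i = 0; i < n; ++i) out[i] = in[i] > 0.0f ? in[i] : 0.0f;
    }

    /* Fused operator: one traversal, intermediates stay in registers and
     * no scratch buffers are needed. */
    void op_add_scale_relu_fused(const float *a, const float *b, float s,
                                 float *out, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            float t = (a[i] + b[i]) * s;
            out[i] = t > 0.0f ? t : 0.0f;
        }
    }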




Embodiment Construction

[0038] In order to make the technical means, creative features, objectives and effects achieved by the present invention easy to understand, the present invention is further described below in conjunction with specific embodiments.

[0039] In the description of the present invention, it should be noted that terms such as "upper", "lower", "inner", "outer", "front end", "rear end", "both ends", "one end" and "the other end" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore, they should not be construed as limiting the invention. In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance.



Abstract

The invention discloses a design method for deploying and optimizing an operator library on an FPGA and a DSP. The method comprises the following steps: designing an underlying hardware operator library corresponding to the operator library of a high-level deep learning framework, and abstractly packaging the operators of a lightweight network to form a fusion operator library; packaging the fusion operator library into a parallel operator library with hardware characteristics by applying a preset segmentation strategy to the fusion operator library in combination with the computing resources of the hardware; and combining the parallel operator library with a rearrangement strategy. The method provides technical support for quickly completing the deployment and optimization of deep learning networks on resource-limited edge devices such as DSPs and FPGAs. Its core is the construction of an underlying deep learning operator library with high practicability and high portability. The operator library is combined with hardware characteristics and fused with efficient strategies such as heuristic segmentation and data flow rearrangement, and it can meet the basic requirements for deploying neural network models on FPGAs and multi-core DSPs.
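
As an illustration of the segmentation step described above, the following C sketch (a hypothetical example, not the patent's implementation; the memory budget, core count and dispatch mechanism are assumptions) picks a tile size that fits a per-core on-chip memory budget and assigns row tiles of a tensor to cores in round-robin fashion, which is one simple way a parallel operator library could map work onto a multi-core DSP or an FPGA.

    #include <stddef.h>

    #define LOCAL_MEM_BYTES (64 * 1024)  /* assumed per-core on-chip scratch budget */
    #define NUM_CORES 8                  /* assumed number of DSP cores */

    /* Heuristic segmentation: choose the largest tile height whose working
     * set (one input tile plus one output tile of 'cols' floats per row)
     * fits within the per-core local memory budget. */
    static size_t pick_tile_rows(size_t rows, size_t cols) {
        size_t bytes_per_row = 2u * cols * sizeof(float); /* input + output */
        size_t t = LOCAL_MEM_BYTES / bytes_per_row;
        if (t == 0) t = 1;
        if (t > rows) t = rows;
        return t;
    }

    /* Signature shared by all tile-level operators in this sketch. */
    typedef void (*tile_kernel)(const float *in, float *out,
                                size_t rows, size_t cols);

    /* Parallel wrapper: split the tensor into row tiles and map tile t to
     * core t % NUM_CORES.  Here the kernel is simply called in place; a
     * real deployment would enqueue each tile on that core's task queue or
     * FPGA command stream. */
    void run_parallel(const float *in, float *out, size_t rows, size_t cols,
                      tile_kernel k) {
        size_t tile = pick_tile_rows(rows, cols);
        size_t n_tiles = (rows + tile - 1) / tile;
        for (size_t t = 0; t < n_tiles; ++t) {
            size_t r = t * tile;
            size_t h = (r + tile <= rows) ? tile : rows - r;
            int core = (int)(t % NUM_CORES); /* target core for this tile */
            (void)core;                      /* dispatch is platform-specific */
            k(in + r * cols, out + r * cols, h, cols);
        }
    }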

Description

Technical Field

[0001] The present invention relates to the technical field of rapid deployment and optimization of neural networks according to the characteristics of different hardware platforms, in particular to an operator library design method for deployment and optimization on FPGA and DSP, and more specifically to an optimized operator library design method for the rapid deployment of lightweight neural networks on FPGA and DSP.

Background Art

[0002] In recent years, deep learning has developed rapidly in image recognition, natural language processing, speech recognition and other fields and has shown remarkable capabilities. The demand for deploying deep learning models keeps growing across many fields, especially in aerospace, autonomous driving and other industrial and manufacturing domains, which place high real-time requirements on the deployment of deep learning algorithms but are constrained by the parameters of the deep learning network model...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F 8/60; G06N 3/04; G06N 3/063
CPC: G06F 8/60; G06N 3/063; G06N 3/045
Inventor: 姜宏旭, 田方正, 李波, 张润华, 李晓宾, 胡宗琦, 谢传良
Owner: HANGZHOU INNOVATION RES INST OF BEIJING UNIV OF AERONAUTICS & ASTRONAUTICS