
An FPGA-based binary neural network acceleration method and system

A binary neural network acceleration technology, applied in the field of FPGA-based binary neural network acceleration methods and systems. It addresses problems such as high computing-resource consumption, small on-chip storage capacity, and limited computing power, and achieves faster convolution computation, preserved model accuracy, and improved computational efficiency.

Active Publication Date: 2022-05-20
武汉魅瞳科技有限公司

AI Technical Summary

Problems solved by technology

[0004] However, such mobile and embedded computing devices can only provide limited computing power and small-capacity on-chip storage. As convolutional neural network architectures grow more complex, with deeper layers and larger parameter counts, deploying convolutional neural networks on mobile and embedded terminals becomes increasingly difficult. Running these massive computations with 32-bit floating-point operands on lightweight chips consumes enormous computing resources and makes good real-time performance hard to achieve.
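The problem described above is the usual motivation for binarization: once weights and activations are constrained to {-1, +1}, a 32-bit floating-point multiply-accumulate collapses into 1-bit logic. The following sketch is illustrative and not taken from the patent; it shows the standard XNOR-popcount identity `dot = 2 * popcount(xnor(a, b)) - n` that makes binary dot products cheap on FPGA fabric. The function names and bit-packing convention (+1 encoded as bit 1) are assumptions for this example.

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n {-1,+1} vectors packed as n-bit integers.

    Bit i is 1 where element i is +1. Two elements multiply to +1 exactly
    when their bits agree, so XNOR marks the agreeing positions and the
    popcount converts agreement/disagreement counts back to a signed sum.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where signs agree
    return 2 * bin(xnor).count("1") - n


# Pack a {-1,+1} vector into an integer: bit i set iff element i is +1.
def pack(v):
    return sum(1 << i for i, x in enumerate(v) if x == +1)


# Cross-check against the plain floating-point dot product.
a = [+1, -1, -1, +1]
b = [+1, +1, -1, -1]
reference = sum(x * y for x, y in zip(a, b))
assert binary_dot(pack(a), pack(b), len(a)) == reference
```

On an FPGA the XNOR and popcount map directly to LUTs and adder trees, which is why replacing 32-bit floating-point MACs with this identity frees so many DSP and memory resources.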

Method used



Examples


Embodiment Construction

[0039] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

[0040] In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not constitute a conflict with each other. The present invention will be further described in detail below in combination with specific embodiments.

[0041] Figure 1 is a structural schematic diagram of an FPGA-based binary neural network acceleration system according to an embodiment of the present invention. As shown in Figure 1, the FPGA-based binary neural network acceleration system includes a convolu...



Abstract

The invention discloses an FPGA-based binary neural network acceleration system comprising a convolution kernel parameter acquisition module, a binary convolutional neural network structure, and a cache module, all implemented on the FPGA, where the cache module is the FPGA's on-chip memory. The modules obtain the input feature map of the picture to be processed, obtain the convolution calculation logic rules, and perform the corresponding binary convolution calculations; the FPGA traverses the convolution calculations across multiple threads according to the convolution calculation logic rules to produce the output feature map data of the image to be processed. With this overall architecture, all calculations of every layer in the binary neural network are offloaded to on-chip memory, without relying on interaction between off-chip and on-chip memory, thereby reducing communication cost between memories, greatly improving computational efficiency, and increasing the detection speed of the image to be detected.
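As a rough functional sketch of the computation the abstract describes, the snippet below binarizes a feature map and kernel with a sign function and performs a valid 2D binary convolution, where each 1-bit multiply is the operation an FPGA would realize as an XNOR. This is an illustrative software model only; the function names, shapes, and binarization convention are assumptions, not details taken from the patent, and the patent's on-chip caching and multi-threaded traversal are not modeled here.

```python
def sign(x: float) -> int:
    """Binarize a value to {-1, +1} (zero maps to +1, a common convention)."""
    return 1 if x >= 0 else -1


def binary_conv2d(fmap, kernel):
    """Valid 2D convolution of a binarized feature map with a binarized kernel.

    fmap: H x W list of lists; kernel: K x K list of lists.
    Each elementwise product of {-1,+1} values is a 1-bit XNOR in hardware.
    """
    H, W = len(fmap), len(fmap[0])
    K = len(kernel)
    bf = [[sign(v) for v in row] for row in fmap]
    bk = [[sign(v) for v in row] for row in kernel]
    out = []
    for i in range(H - K + 1):
        row = []
        for j in range(W - K + 1):
            acc = 0
            for di in range(K):
                for dj in range(K):
                    acc += bf[i + di][j + dj] * bk[di][dj]  # XNOR on FPGA
            row.append(acc)
        out.append(row)
    return out


result = binary_conv2d([[1, -1, 1], [-1, 1, -1], [1, -1, 1]],
                       [[1, -1], [-1, 1]])
```

Keeping `fmap`, `kernel`, and the intermediate accumulators entirely in on-chip BRAM, as the abstract claims, is what removes the off-chip memory traffic that usually dominates the runtime of such loops.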

Description

Technical field

[0001] The invention belongs to the field of image processing, and in particular relates to an FPGA-based binary neural network acceleration method and system.

Background technique

[0002] Significant advances in artificial intelligence technology have begun to benefit all aspects of human life. From a vacuum robot in the home to an entire suite of smart production equipment in a factory, many tasks have already achieved a high degree of automation. Deep learning plays a pivotal role in this technological revolution and is widely applied in face recognition, object detection, image processing, and other fields. The main algorithm used is the convolutional neural network. This high-performing deep learning algorithm has been deployed on large numbers of PCs, mobile phones, and embedded dedicated accelerators to carry out various intelligent computing tasks, achieving good acceleration results.

[0003] ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/04, G06N3/063
CPC: G06N3/063, G06N3/045, Y02D10/00
Inventors: 李开, 邹复好, 祁迪
Owner: 武汉魅瞳科技有限公司