
FPGA (field programmable gate array)-based universal fixed-point-number neural network convolution accelerator hardware structure

A neural network hardware architecture technology, applied in the fields of electronic information and deep learning, that addresses the problems of complex read-write control logic, difficult FPGA design, and non-reusable logic, achieving accurate data precision, high computing speed, good portability and flexibility, and low design complexity.

Inactive Publication Date: 2017-11-24
SOUTHEAST UNIV WUXI INST OF TECH INTEGRATED CIRCUITS +1

AI Technical Summary

Problems solved by technology

When only FPGA-side logic is used to control the reading and writing of large amounts of data, a large number of memory accesses must be performed, the read-write control logic becomes extremely complex, and an increase in power consumption is inevitable. Moreover, this logic is not reusable: dedicated control logic must be designed for every layer of the network, which makes FPGA design difficult.
In the proposed structure, the CPU only needs to operate on memory to handle reads and writes, which significantly reduces design complexity.
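
To make the contrast concrete, here is a minimal, hypothetical CPU-side sketch (not taken from the patent): each layer is described by a small descriptor that the CPU writes through memory-mapped registers, so the same FPGA control logic can be reused for every layer. The base address, register layout, and all names are assumptions made purely for illustration.

```c
/* Hypothetical sketch (not from the patent): a CPU-side driver that reuses
 * one piece of FPGA control logic for every layer by passing a per-layer
 * descriptor through memory-mapped registers. ACCEL_BASE, layer_desc_t and
 * the register offsets are illustrative assumptions. */
#include <stdint.h>

#define ACCEL_BASE 0x43C00000u                 /* assumed AXI-mapped base address */
static volatile uint32_t * const accel = (volatile uint32_t *)ACCEL_BASE;

typedef struct {
    uint32_t src_addr;   /* DDR address of the input feature map  */
    uint32_t dst_addr;   /* DDR address of the output feature map */
    uint16_t in_ch, out_ch;
    uint16_t width, height;
    uint8_t  kernel, stride;
    uint8_t  frac_bits;  /* fixed-point fraction width for this layer */
} layer_desc_t;

/* The CPU only touches memory and registers; the FPGA logic stays the same
 * for every layer, which is the complexity reduction the text describes. */
static void run_layer(const layer_desc_t *d)
{
    accel[0] = d->src_addr;
    accel[1] = d->dst_addr;
    accel[2] = ((uint32_t)d->in_ch  << 16) | d->out_ch;
    accel[3] = ((uint32_t)d->width  << 16) | d->height;
    accel[4] = ((uint32_t)d->kernel << 16) | ((uint32_t)d->stride << 8) | d->frac_bits;
    accel[5] = 1;                        /* start bit            */
    while ((accel[6] & 1u) == 0) { }     /* poll a done flag     */
}
```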



Examples


Embodiment Construction

[0023] Embodiments of the present invention will be further described below in conjunction with the accompanying drawings. Examples of the embodiments are shown in the accompanying drawings, wherein the same or similar reference numerals represent the same or similar elements or elements with similar functions. The embodiments described below by referring to the figures are exemplary and are intended to explain the present invention and should not be construed as limiting the present invention.

[0024] The hardware structure of the FPGA-based general-purpose fixed-point number neural network convolution accelerator proposed according to the embodiment of the present invention is described below with reference to the accompanying drawings. As shown in figure 1, the FPGA-based general-purpose fixed-point number neural network convolution accelerator hardware structure includes: a direct access controller, an AXI4 bus interface protocol, a high parallel buffer area (highly parall...
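
As an illustration of what a "high parallel buffer area" can look like behaviorally, the following C sketch models a ping-pong buffer whose read side delivers several words per access. This is an assumption-laden sketch, not the patent's actual design; the bank size and the parallelism factor P are invented for the example.

```c
/* Illustrative behavioral model (not taken from the patent) of a highly
 * parallel buffer: two banks are swapped so the bus side can fill one bank
 * while the convolver drains the other, and each read delivers P words to
 * feed parallel multipliers. Sizes and P are assumptions. */
#include <stdint.h>
#include <string.h>

#define P          8         /* words delivered to the convolver per read */
#define BANK_WORDS 4096

typedef struct {
    int16_t bank[2][BANK_WORDS]; /* ping-pong buffer banks                 */
    int     fill;                /* bank currently written from the bus    */
} parallel_buf_t;

/* Bus side: burst-write a block into the fill bank. */
static void buf_fill(parallel_buf_t *b, const int16_t *src, int n)
{
    memcpy(b->bank[b->fill], src, (size_t)n * sizeof(int16_t));
}

/* Compute side: fetch P words in one call from the other bank,
 * mimicking a wide memory port that feeds P multipliers in parallel. */
static void buf_read_wide(const parallel_buf_t *b, int offset, int16_t out[P])
{
    const int16_t *rd = b->bank[1 - b->fill];
    for (int i = 0; i < P; i++)
        out[i] = rd[offset + i];
}

/* Swap roles once the convolver has consumed the read bank. */
static void buf_swap(parallel_buf_t *b) { b->fill = 1 - b->fill; }
```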



Abstract

The invention discloses an FPGA (field programmable gate array)-based universal fixed-point-number neural network convolution accelerator hardware structure, which comprises a universal AXI4 high-speed bus interface, a universal GPIO (general purpose input / output) interface, a universal convolver, a universal read-write control unit, a universal state controller and a universal convolution result buffer. Universal memory hardware is provided, and highly parallel read-write operations are supported; after data storage, the universal convolver supports precise configuration of the fixed-point format, configuration of the convolution operation level, and highly parallel convolution operation in coordination with the highly parallel read-write operations; the universal read-write control unit includes RAM, ROM and FIFO read-write control logic and address-generation logic; the universal state controller triggers the corresponding unit operations for each convolution layer and for the read-write and calculation processes, thereby controlling the overall calculation flow; the universal convolution result buffer caches results in parallel at high speed and sends the processing result to the bus according to a convolution-result sectional accumulation method. Verification on Yolo-algorithm-based face detection and CNN (convolutional neural network)-based face recognition demonstrates high operating speed and high data precision.
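
The following C sketch illustrates, under assumed bit widths, how a configurable fixed-point convolver and the "convolution result sectional accumulation" mentioned in the abstract might work: each input-channel section yields a partial sum, and the partial sums are then added and rescaled to the configured fixed-point format. The 16-bit data type, 32-bit partial sums and saturating rescale are illustrative choices, not the patent's specification.

```c
/* Minimal sketch, assuming 16-bit fixed-point operands with a configurable
 * fraction width and wider partial sums; section size, bit widths and
 * rounding behavior are assumptions for the example. */
#include <stdint.h>

/* Multiply-accumulate over one K x K window of one input-channel section. */
static int32_t conv_window(const int16_t *pix, const int16_t *wgt,
                           int k, int row_pitch)
{
    int32_t acc = 0;
    for (int r = 0; r < k; r++)
        for (int c = 0; c < k; c++)
            acc += (int32_t)pix[r * row_pitch + c] * wgt[r * k + c];
    return acc;                     /* partial sum for one section          */
}

/* Sectional accumulation: add the partial sums of all sections, then
 * rescale back to the configured fixed-point format (frac_bits fractional
 * bits), saturating to the int16 range.  Arithmetic right shift assumed. */
static int16_t accumulate_sections(const int32_t *partials, int n_sections,
                                   int frac_bits)
{
    int64_t sum = 0;
    for (int s = 0; s < n_sections; s++)
        sum += partials[s];
    sum >>= frac_bits;              /* drop the extra fraction bits         */
    if (sum >  32767) sum =  32767;
    if (sum < -32768) sum = -32768;
    return (int16_t)sum;
}
```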

Description

Technical Field

[0001] The present invention relates to the technical fields of electronic information and deep learning, and in particular to a general-purpose fixed-point number neural network convolution accelerator hardware structure based on FPGA (Field-Programmable Gate Array).

Background Technique

[0002] Since Hinton and others proposed deep learning in 2006, artificial intelligence solutions based on the Convolutional Neural Network (CNN) have become increasingly abundant. Mobile platforms such as drones and robots use convolutional neural networks to build applications such as speech recognition, face recognition and image recognition. Due to the huge amount of computation of a convolutional neural network, hardware computing support such as GPU (Graphics Processing Unit), FPGA and ASIC (Application Specific Integrated Circuit) is required, such as represented by Nvidia, a...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06N3/063G06F5/06
CPCG06F5/06G06N3/063
Inventor 陆生礼韩志庞伟李硕周世豪沈志源
Owner SOUTHEAST UNIV WUXI INST OF TECH INTEGRATED CIRCUITS