
Convolutional neural network accelerator based on CPU-FPGA memory sharing

A CPU-FPGA technology for convolutional neural networks, applied in the field of neural networks, which addresses the problem of deploying convolutional neural networks under limited power budgets and achieves high throughput, low power consumption, and an efficient, fast implementation.

Active Publication Date: 2020-09-04
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

[0005] In view of this, the present invention provides a convolutional neural network accelerator based on CPU-FPGA memory sharing, which solves the technical problem of applying convolutional neural networks to embedded systems with limited power budgets, and provides a new way to accelerate convolutional neural network computation in low-power application scenarios.




Embodiment Construction

[0044] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0045] This embodiment is designed and implemented on the Zynq UltraScale+ MPSoC heterogeneous computing platform. The target network used to verify the design is Yolo-Tiny, which comprises 9 convolutional layers and 6 max-pooling layers.
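For illustration only, the Yolo-Tiny layer sequence referred to above can be sketched as a simple list. The layer counts (9 convolutions, 6 max-pooling layers) come from the text; the kernel sizes and channel widths below are hypothetical placeholders, not taken from the patent:

```python
# Sketch of a Yolo-Tiny-style backbone: 9 convolutional layers interleaved
# with 6 max-pooling layers, as described in the embodiment. The channel
# widths are illustrative assumptions, not values from the patent text.
layers = []
channels = 16
for _ in range(6):
    layers.append(("conv", channels))   # 3x3 convolution (assumed size)
    layers.append(("maxpool", None))    # 2x2 max pooling (assumed size)
    channels *= 2
for _ in range(3):
    layers.append(("conv", channels))   # trailing convolutions, no pooling

conv_count = sum(1 for kind, _ in layers if kind == "conv")
pool_count = sum(1 for kind, _ in layers if kind == "maxpool")
print(conv_count, pool_count)  # 9 6
```

The alternating conv/pool pattern halves the spatial resolution six times while doubling the channel count, which is the usual shape of such a backbone.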

[0046] As shown in Figures 1 and 2, the embodiment of the present invention discloses a convolutional neural network accelerator based on CPU-FPGA memory sharing, including: a CPU processing subsys...



Abstract

The invention discloses a convolutional neural network accelerator based on CPU-FPGA memory sharing. A CPU processing subsystem comprises an input control module, a configuration parameter generation module and an output control module: the input control module receives and caches the pixel data and the weight data; the configuration parameter generation module controls configuration parameters; and the output control module controls data transmission. An FPGA acceleration subsystem comprises an on-chip storage module, a calculation engine module and a control module: the on-chip storage module buffers data and handles read and write accesses; the calculation engine module accelerates calculation; and the control module directs the read and write operations of the on-chip storage module and performs data exchange and calculation control with the calculation engine module. This design brings into full play the high parallelism, high throughput and low power consumption of the FPGA, while also exploiting the flexible and efficient data processing of the CPU, so that the whole system realizes the inference process of the convolutional neural network efficiently and quickly at relatively low power consumption.
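A minimal sketch of the module split the abstract describes, with the CPU side caching pixel/weight data and generating configuration parameters into shared memory, and the FPGA side reading them back to drive its compute engine. All class, method and key names here are hypothetical illustrations of the dataflow, not the patent's implementation; a plain dict stands in for the physically shared DRAM, and a dot product stands in for the accelerated convolution:

```python
# Illustrative model of the CPU-FPGA memory-sharing dataflow from the
# abstract. The shared dict stands in for shared DRAM; all names are
# hypothetical and not taken from the patent.
shared_memory = {}

class CPUSubsystem:
    def input_control(self, pixels, weights):
        # Receive and cache pixel and weight data into shared memory.
        shared_memory["pixels"] = pixels
        shared_memory["weights"] = weights

    def configuration_parameters(self, layer):
        # Generate per-layer configuration parameters for the accelerator.
        shared_memory["config"] = {"layer": layer}

class FPGASubsystem:
    def control_module(self):
        # Read the configuration and drive the compute engine from the
        # shared buffers (on-chip buffering is elided in this sketch).
        cfg = shared_memory["config"]
        return self.compute_engine(shared_memory["pixels"],
                                   shared_memory["weights"], cfg)

    def compute_engine(self, pixels, weights, cfg):
        # Stand-in for the accelerated convolution: a dot product here.
        return sum(p * w for p, w in zip(pixels, weights))

cpu, fpga = CPUSubsystem(), FPGASubsystem()
cpu.input_control([1, 2, 3], [4, 5, 6])
cpu.configuration_parameters(layer=0)
print(fpga.control_module())  # 1*4 + 2*5 + 3*6 = 32
```

The point of the sketch is the division of labor: the CPU handles flexible data marshalling and configuration, while the FPGA side only consumes prepared buffers, which is what lets both sides run at their respective strengths.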

Description

Technical Field

[0001] The invention relates to the technical field of neural networks, and more specifically to a convolutional neural network accelerator based on CPU-FPGA memory sharing.

Background Technique

[0002] A convolutional neural network is a kind of feed-forward neural network with convolution calculations and a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks have the ability of representation learning and can perform translation-invariant classification of input information according to their hierarchical structure. With the introduction of deep learning theory and the improvement of numerical computing equipment, convolutional neural networks have developed rapidly and are widely used in computer vision, natural language processing and other fields.

[0003] Due to the complexity of convolutional neural network calculations, general-purpose CPUs can no longer meet the computing needs. Exi...
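As a concrete illustration of the convolution calculation the background section refers to, a minimal sketch (not the patent's implementation) of the sliding-window operation used in CNN layers:

```python
# Naive 2D convolution (cross-correlation, as conventionally used in CNNs):
# slide the kernel over the input and accumulate elementwise products.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, -1]]  # simple difference kernel
print(conv2d(image, kernel))  # [[-4, -4], [-4, -4]]
```

Because the same kernel is applied at every position, a feature detected anywhere in the input produces the same response wherever it appears, which is the translation-invariance property mentioned above; the nested multiply-accumulate loops are also exactly the workload that hardware accelerators parallelize.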


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/063; G06N3/08; G06F9/54; G06F15/78
CPC: G06N3/063; G06N3/084; G06F9/544; G06F15/7817; G06N3/045; Y02D10/00
Inventors: 姜宏旭, 张永华, 李波, 刘晓戬, 林珂玉, 胡宗琦
Owner: BEIHANG UNIV