
Convolutional Neural Network Hardware Accelerator System Based on Convolution Kernel Splitting and Its Computing Method

A convolutional neural network hardware accelerator technology, applied in the field of large-scale neural network computation. It addresses the inflexibility of existing designs and achieves the effects of simplifying convolution calculation, reducing the use of computing hardware resources, and avoiding large-scale calculation.

Active Publication Date: 2020-11-27
HEFEI UNIV OF TECH

AI Technical Summary

Problems solved by technology

Although convolutional neural networks can in theory be implemented with large-scale parallelism in hardware, practical designs are limited by bandwidth, computing resources, and storage, especially when facing large-scale convolution calculations.




Embodiment Construction

[0034] In this embodiment, as shown in Figure 1, a convolutional neural network hardware accelerator system based on convolution kernel splitting cooperates with a host computer to accelerate large-scale convolution operations in a convolutional neural network through hardware circuits. It comprises: a zero-padding module, a control module, a convolution kernel and data splitting module, a convolution kernel weight cache module, a data cache module, an on-chip address index module, a core computing module, and an intermediate result cache module.
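The patent does not publish implementation details for these modules, but the role of the zero-padding module can be illustrated in software. The sketch below is an assumption for illustration only; the function name and the list-of-lists data layout are mine, not from the patent. It surrounds a 2D feature map with a border of zeros, as is done before convolving near the image edges:

```python
def zero_pad(image, pad):
    """Surround a 2D feature map with `pad` rows/columns of zeros,
    analogous to what a zero-padding module does before convolution."""
    h, w = len(image), len(image[0])
    padded = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            padded[pad + i][pad + j] = image[i][j]
    return padded

# A 2x2 map padded by one ring of zeros becomes 4x4.
print(zero_pad([[1, 2], [3, 4]], 1))
```

In a hardware pipeline the padding would typically be realized by address logic rather than by materializing zeros in memory; this software version only shows the intended input/output relationship.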

[0035] First, the host computer provides the control module with the neural network parameters required for the accelerated operation, including the number of convolutional layers and pooling layers, the input data size of each layer, the stride, the convolution kernel size, and the start addresses of the regions in off-chip memory where the weight data and image data are stored. The control module then directs the on-chip address index module to generate the add...
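The core idea named in the title, splitting a convolution kernel so that smaller partial convolutions can be accumulated into the full result, follows from the linearity of convolution. The sketch below is illustrative only: the patent's actual splitting scheme, control signals, and caching are not reproduced, and all function names are mine. It splits a 3x3 kernel into two row bands, convolves each band separately, and checks that the accumulated partial results equal the full convolution:

```python
def partial_conv(image, sub_kernel, row_off, full_kh):
    """Valid 2D cross-correlation with a row-band of the kernel.
    `row_off` is the band's starting row inside the full kernel;
    `full_kh` is the full kernel height, fixing the output size."""
    kw = len(sub_kernel[0])
    oh = len(image) - full_kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0
            for r, row in enumerate(sub_kernel):
                for c, wgt in enumerate(row):
                    s += image[i + row_off + r][j + c] * wgt
            out[i][j] = s
    return out

def accumulate(a, b):
    """Element-wise sum of two partial-result maps, as an
    intermediate-result cache might accumulate them."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
k = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]

full = partial_conv(img, k, 0, 3)                    # full 3x3 convolution
split = accumulate(partial_conv(img, k[:2], 0, 3),   # rows 0-1 of the kernel
                   partial_conv(img, k[2:], 2, 3))   # row 2 of the kernel
print(full == split)  # the partial sums reproduce the full convolution
```

Because convolution is linear in the kernel weights, any partition of the kernel yields partial sums that can be accumulated into the exact full result; this is what allows compute units sized for small kernels to handle large ones.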



Abstract

The invention discloses a convolutional neural network hardware accelerator system based on convolution kernel splitting, and its calculation method. The system comprises a zero-filling module, a control module, a convolution kernel and data splitting module, a convolution kernel weight cache module, a data cache module, an on-chip address index module, a core calculation module, and an intermediate result cache module. The zero-filling module performs zero-filling processing on the convolution kernel weights and the picture data; the control module controls the operation of the related modules; the convolution kernel and data splitting module generates a splitting control signal; the convolution kernel weight cache module and the data cache module store the zero-filled convolution kernel weights and picture data; the on-chip address index module generates address indexes; the core calculation module performs the data calculation; and the intermediate result cache module stores intermediate calculation results. The system improves operation parallelism and reduces hardware complexity, making it suitable for large-scale convolution calculation.

Description

Technical Field

[0001] The invention relates to the computation of large-scale neural networks, and in particular to hardware-parallel acceleration of large-scale convolution calculations in neural networks.

Background Technique

[0002] The convolutional neural network (CNN) originated in the 1960s, when the neurobiologists Hubel and Wiesel discovered that different cells in the cat's visual cortex are activated according to the orientation of light stimuli. Their model of how networks of cells activate and transform images laid the foundation for the emergence of convolutional neural networks. By 1980, the Japanese scientist K. Fukushima had proposed the neocognitron, which is considered the first implementation prototype of a convolutional neural network.

[0003] With the rise of deep learning in artificial intelligence in recent years, convolutional neural networks have received more and more attention. On the one ha...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/063; G06N3/04
Inventors: 倪伟, 梁修壮, 储萍, 徐春琳, 王月恒
Owner: HEFEI UNIV OF TECH