
Convolutional neural network hardware acceleration method of parallel computing unit

A convolutional neural network and parallel computing technology, applied to biological neural network models, computer components, and computation. It addresses problems such as the heavy computation and large parameter volumes of such networks, and existing schemes that do not consider the generality of the parameter quantization method with respect to the network structure, achieving a significant acceleration effect and high inference speed.

Pending Publication Date: 2022-07-29
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

However, the heavy computation and large-scale parameters of convolutional neural networks make them ill-suited to embedded devices with limited hardware resources. How to deploy convolutional neural networks on edge embedded devices and turn those devices into intelligent terminals has become a research hotspot in academia and industry.
[0003] At present, there are many products on the market that implement hardware acceleration of convolutional neural networks based on parallel computing modules, such as vector calculation modules. However, these schemes do not consider the generality of the model-parameter quantization method with respect to the network structure.



Examples


Embodiment Construction

[0029] The present invention will be further described below with reference to the accompanying drawings and embodiments.

[0030] Referring to Figure 1, the present invention provides a convolutional neural network hardware acceleration method based on static quantization, hierarchical quantization, and parallel computing units, comprising the following steps:

[0031] Step S1: quantize the trained convolutional neural network model using static, layer-wise (hierarchical) quantization;

[0032] In this embodiment, step S1 specifically comprises:

[0033] Step S11: First, extract the parameters from the model, visualize them as histograms, and observe their distribution. Figures 2 and 3 plot the distributions of the first and second convolutional layers of the example, respectively. The figures show that most of the weight parameters of each layer are concentrated within a certain interval. For ...
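The record is truncated here, but the layer-wise static quantization that step S11 motivates can be sketched as follows. This is a minimal illustration, not the patented method: the coverage fraction, 8-bit width, and symmetric rounding scheme are all assumptions chosen for the sketch.

```python
import numpy as np

def quantize_layer_weights(weights, num_bits=8, coverage=0.999):
    """Per-layer (hierarchical) static quantization sketch.

    Clips to the symmetric range covering `coverage` of the weight
    magnitudes (the "certain interval" where most weights concentrate),
    then maps to signed integers with one scale per layer.
    """
    magnitudes = np.abs(weights).ravel()
    # Threshold below which `coverage` of the magnitudes fall.
    threshold = np.quantile(magnitudes, coverage)
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = threshold / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# Each layer gets its own scale, reflecting its own distribution,
# unlike a single global scale shared by the whole network.
rng = np.random.default_rng(0)
conv1 = rng.normal(0.0, 0.05, size=(16, 3, 3, 3))   # narrow distribution
conv2 = rng.normal(0.0, 0.20, size=(32, 16, 3, 3))  # wider distribution
q1, s1 = quantize_layer_weights(conv1)
q2, s2 = quantize_layer_weights(conv2)
```

Because the scale is derived from each layer's own histogram, a layer with a wider weight distribution automatically receives a larger quantization step.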



Abstract

The invention relates to a convolutional neural network hardware acceleration method using parallel computing units. The method comprises the following steps: S1, quantize the weight parameters of a trained convolutional neural network model using static and layer-wise (hierarchical) quantization; S2, execute the quantized model on parallel computing units preset in a hardware circuit; and S3, according to the different degrees of computational parallelism exhibited in the inference stage by models with different input sizes, adapt to convolutional neural network models with different convolution kernel sizes through an on-chip reconfigurable technique. The method achieves hardware acceleration of convolutional neural networks, attaining high inference speed at low power consumption.
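As a rough software analogue of step S2, the sketch below models output-channel parallelism: several compute units each handle a slice of the output channels, which in hardware would execute concurrently. The unit count and the channel-partitioning scheme are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def conv2d_parallel_units(x, w, num_units=4):
    """Toy model of a convolution mapped onto parallel compute units.

    x: input feature map, shape (in_channels, H, W)
    w: kernels, shape (out_channels, in_channels, k, k)
    Each "unit" owns a contiguous block of output channels; here the
    blocks run sequentially, but in hardware they run in parallel.
    """
    out_c, in_c, k, _ = w.shape
    out_h = x.shape[1] - k + 1
    out_w = x.shape[2] - k + 1
    out = np.zeros((out_c, out_h, out_w))
    for unit_channels in np.array_split(np.arange(out_c), num_units):
        for oc in unit_channels:
            for i in range(out_h):
                for j in range(out_w):
                    # One multiply-accumulate window per output pixel.
                    out[oc, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[oc])
    return out
```

Partitioning over output channels is only one possible mapping; real accelerators may also parallelize over input channels or spatial positions, and step S3's reconfigurability would let the same units serve different kernel sizes k.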

Description

Technical field
[0001] The invention relates to the technical field of computer hardware, in particular to a convolutional neural network hardware acceleration method based on static quantization, hierarchical quantization, and parallel computing units.
Background technique
[0002] In recent years, with the development of artificial intelligence technology, algorithms represented by convolutional neural networks have achieved excellent results in computer vision applications. However, the heavy computation and large-scale parameters of convolutional neural networks make them ill-suited to embedded devices with limited hardware resources; how to deploy convolutional neural networks on edge embedded devices and turn those devices into intelligent terminals has become a research hotspot in academia and industry.
[0003] At present, there are many products on the market that implement hardware acceleration for convolutional neural networks based on parallel...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/063; G06N3/04; G06K9/62
CPC: G06N3/063; G06N3/045; G06F18/214
Inventors: 林志贤, 王利翔, 林珊玲, 郭太良, 林坚普, 叶芸, 张永爱, 吴宇航, 赵敬伟, 梅婷
Owner: FUZHOU UNIV