Operational circuit of neural network

A neural-network computing circuit technology, applied in the field of artificial intelligence, that solves problems such as lack of flexibility, lack of generalization ability, and reduced precision, and achieves the effects of saving hardware resources, reducing multiplication power consumption, and increasing configurability.

Active Publication Date: 2020-10-02
重庆联芯致康生物科技有限公司

AI Technical Summary

Problems solved by technology

[0004] At present, most neural network hardware designs focus on the inference speed of the network and have low precision requirements; or precision is sacrificed to save hardware resources; or precision requirements are high but cannot be adjusted dynamically. This inflexibility means the hardware can serve only a single, specific task requirement and lacks generalization ability.
Although weight pruning and hardware zero-skipping can sparsify a neural network and reduce the number of multiplication operations to some extent, pruning the network weights usually brings a certain degree of precision loss: the larger the pruned weight values, the greater the loss. The designer must therefore subjectively tune the sparsity of the pruned network for each neural network and application, and adaptive adjustment cannot be achieved.
Although compressing and transmitting weights with run-length coding can increase transmission speed and reduce transmission power consumption, this fixed coding scheme lacks flexibility and cannot meet the compression requirements of different neural networks, such as setting the length of the compressed data according to the height and width of the feature map.
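To illustrate the configurability the text calls for, the following is a minimal software sketch of run-length coding in which the run-length field width is a parameter (so it could, for example, be chosen from the feature-map dimensions) rather than fixed. This is an illustrative model, not the circuit from the patent; the function names and the 4-bit default are assumptions.

```python
def rle_encode(values, run_bits=4):
    """Run-length encode a sequence of weights/activations.

    run_bits is configurable: the longest encodable run is 2**run_bits - 1,
    so the field width can be matched to the data (e.g. feature-map width)
    instead of being hard-wired.
    """
    max_run = (1 << run_bits) - 1
    encoded = []
    i = 0
    while i < len(values):
        v, run = values[i], 1
        while i + run < len(values) and values[i + run] == v and run < max_run:
            run += 1
        encoded.append((v, run))  # (value, run length) pair
        i += run
    return encoded

def rle_decode(encoded):
    """Expand (value, run length) pairs back into the original sequence."""
    out = []
    for v, run in encoded:
        out.extend([v] * run)
    return out
```

A sparse weight row such as `[0]*10 + [3]` compresses to just two pairs with `run_bits=4`, while a narrower `run_bits` would split long runs into several pairs.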


Image

Three patent drawings, each titled "Operational circuit of neural network".


Embodiment Construction

[0040] In order to make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in further detail below in conjunction with the embodiments and the accompanying drawings.

[0041] In order to realize a high-precision, low-power approximate-computing hardware implementation of the convolution operations involved in a neural network, the computing circuit of the present invention includes a data control unit, a weight data storage unit, a feature-map data storage unit, a convolution calculation unit, a data scaling unit, a data accumulation buffer unit, a truncation control unit, and a convolution result data storage unit. The hardware architecture of the computing circuit is shown schematically in Figure 1; the multiple multipliers shown in Figure 1 are arranged in an array to form the convolution calculation unit of this embodiment. Among them...
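The multiply, scale, accumulate, and truncate stages described above can be sketched as a simplified software model. This is not the patented circuit; the shift-based scaling, the 32-bit accumulator, and the 8-bit output width are illustrative assumptions.

```python
def saturate(x, bits):
    """Clamp x into a signed `bits`-bit range (saturated truncation on overflow)."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, x))

def conv_mac(weights, activations, scale_shift=2, acc_bits=32, out_bits=8):
    """Model of one multiply -> scale -> accumulate -> truncate datapath."""
    acc = 0
    for w, a in zip(weights, activations):
        prod = w * a                          # convolution calculation unit (multiplier array)
        prod >>= scale_shift                  # data scaling unit: configurable scaling factor
        acc = saturate(acc + prod, acc_bits)  # data accumulation buffer unit
    return saturate(acc, out_bits)            # truncation control unit: overflow -> saturate
```

For example, `conv_mac([1, 2, 3], [4, 5, 6], scale_shift=0)` accumulates 4 + 10 + 18 = 32, which fits in 8 bits; a large accumulation instead saturates at 127 rather than wrapping around, modeling the adaptive saturated truncation.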



Abstract

The invention discloses an operational circuit of a neural network, belonging to hardware implementations of neural networks. Scaling factors are configured for the multiplier output data in the convolution calculation so as to dynamically adjust the data magnitude; during the accumulation operation of the convolution, the result data is dynamically quantized to preserve effective precision and keep the data scale uniform within a layer; and during truncation of the result data, saturated truncation is performed adaptively through data-overflow detection. Based on the proposed adaptive threshold adjustment technique, threshold judgment is applied to the data entering the multiplier, so that, with precision guaranteed, some values close to zero bypass the multiplication. Based on the proposed configurable compression transmission technique, the run-length coding scheme is made configurable so that it meets different network compression requirements. The invention improves the data precision of the neural network inference process and saves hardware resource overhead.
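The threshold-based multiplication bypass can be sketched as follows. This is a toy model under stated assumptions: the patent's actual adaptation rule is not disclosed here, so `adapt_threshold` uses a simple skip-ratio feedback rule purely for illustration.

```python
def thresholded_mac(weights, activations, threshold):
    """MAC where near-zero operands bypass the multiplier entirely."""
    acc, skipped = 0.0, 0
    for w, a in zip(weights, activations):
        if abs(w) < threshold or abs(a) < threshold:
            skipped += 1      # product treated as zero: no multiplication spent
            continue
        acc += w * a
    return acc, skipped

def adapt_threshold(threshold, skip_ratio, target_ratio, step=0.01):
    """Illustrative adaptation rule (an assumption, not the patented method):
    raise the threshold when too few multiplications are being skipped,
    lower it when too many are, to protect accuracy."""
    if skip_ratio < target_ratio:
        return threshold + step
    return max(0.0, threshold - step)
```

With `threshold=0.01`, the pair `(0.001, 1.0)` and the pair `(2.0, 0.0005)` are both skipped, so only `0.5 * 1.0` is actually multiplied; the threshold then drifts up or down across batches depending on the observed skip ratio.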

Description

Technical Field

[0001] The invention belongs to the technical field of artificial intelligence and specifically relates to a hardware implementation of a neural network.

Background Technique

[0002] Neural network hardware refers to a hardware system that supports the scale of the simulated neural network model and the speed of neural computation. Its main implementations include FPGA (Field-Programmable Gate Array) designs, neural chips, and DSP (Digital Signal Processing) accelerator boards; the core of the hardware realization is the design of the neural network architecture. As an important part of implementing artificial intelligence technology, neural network hardware has gradually become a research hotspot, and its architecture design in particular has been widely applied in cloud, terminal, and other settings. [0003] The main function of the neural network hardware is to accelerate the neural ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/063
CPC: G06N3/063; G06N3/045
Inventors: 周军, 刘野, 阙禄颖, 刘青松
Owner: 重庆联芯致康生物科技有限公司