
Addition network-oriented near-storage neural network accelerator and acceleration method thereof

A neural-network and additive-network technology, applied in the field of near-storage neural network accelerators and their acceleration methods. It addresses the problems of low inference speed, high power consumption, and low efficiency, and achieves the effects of high utilization, low power consumption, and reduced energy consumption.

Pending Publication Date: 2021-08-27
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide an addition-network-oriented near-storage neural network accelerator and an acceleration method thereof, so as to solve the technical problems of high power consumption, low inference speed, and low efficiency in the existing inference computation process.




Embodiment Construction

[0039] In order to better understand the purpose, structure, and function of the present invention, the addition-network-oriented near-storage neural network accelerator and its acceleration method are described in further detail below in conjunction with the accompanying drawings.

[0040] Figure 1 is the architecture diagram of the accelerator of this embodiment, which comprises an instruction generation unit, a computing unit group, and a post-processing unit.

[0041] The instruction generation unit consists of a cache unit, a comparator array, and an instruction pool. The cache unit caches the quantized data; the comparator array compares the quantized data to generate sign bits and assemble instructions; the instruction pool temporarily stores the generated instructions and dispatches them in turn to the corresponding computing units.
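The comparator step above can be sketched in miniature as follows. This is an illustrative sketch only, assuming that the sign of the difference between the quantized activation and the quantized weight is what gets bundled into the instruction; the function and field names (`make_instruction`, `sign`, `unit`) are hypothetical, not from the patent.

```python
def make_instruction(x_q, w_q, unit_id):
    """Sketch of the instruction-generation step: compare a quantized
    activation x_q and quantized weight w_q to obtain the sign of
    (x_q - w_q), then bundle it with the target computing-unit id.

    Only the low-bit quantized values are needed for the comparison;
    the full-precision operands stay inside the computing unit.
    """
    sign = 1 if x_q >= w_q else -1
    return {"unit": unit_id, "sign": sign}

# Tiny usage example: activation 3 vs. weight 5 yields a negative sign.
instr = make_instruction(x_q=3, w_q=5, unit_id=0)
print(instr)  # {'unit': 0, 'sign': -1}
```

Separating the comparison (cheap, low-bit) from the full-precision arithmetic is what allows the sign extraction to run in the instruction generation unit while the heavy operands never leave near-storage compute.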

[0042] The computing unit group includes a plurality of independent computing units, and the number ...



Abstract

The invention discloses an addition-network-oriented near-storage neural network accelerator and an acceleration method thereof. The neural network accelerator comprises an instruction generation unit, a calculation unit group, and a post-processing unit. The instruction generation unit compares the low-bit weight and activation to generate an instruction; the instruction is transmitted to a specific calculation unit in the calculation unit group to guide the calculation on the full-precision weight or activation stored in that unit; the output is then transmitted, according to the instruction, to the post-processing unit, which produces the final result. By separating sign extraction from full-precision data calculation and designing a reconfigurable, extensible near-storage calculation unit, the accelerator can be conveniently adapted to different neural network inference tasks. Meanwhile, the addition-network model is compressed and grouped so that only non-sparse full-precision data are retained, reducing the model size and the number of calculations.
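The compression idea in the abstract, keeping only non-sparse full-precision values, can be illustrated with a generic index-plus-value scheme. This is a minimal sketch of the general technique, not the patent's storage format; the function name `compress_nonzero` is hypothetical.

```python
import numpy as np

def compress_nonzero(weights):
    """Illustrative sparse compression: drop zero entries and keep
    only the non-zero full-precision values with their indices.

    Returns (indices, values) so the original positions can be
    recovered when the computing units consume the data.
    """
    idx = np.flatnonzero(weights)  # positions of non-zero entries
    return idx, weights[idx]

# Tiny usage example on a 5-element weight vector with three zeros.
w = np.array([0.0, 1.5, 0.0, -0.75, 0.0])
idx, vals = compress_nonzero(w)
print(idx.tolist(), vals.tolist())  # [1, 3] [1.5, -0.75]
```

Storing only the surviving values both shrinks the model and lets the accelerator skip the corresponding additions entirely, which is where the claimed reduction in calculation count comes from.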

Description

Technical field

[0001] The invention belongs to the technical field of deep neural network acceleration computing, and in particular relates to an addition-network-oriented near-storage neural network accelerator and an acceleration method thereof.

Background technique

[0002] The convolutional neural network (CNN) is a machine learning algorithm that has achieved good results and has been widely used in computer vision, speech recognition, natural language processing, and other fields. However, the convolutional part of the CNN algorithm is computationally intensive and occupies huge storage space, so when convolutional neural networks are deployed on end devices and embedded devices, application-specific integrated circuits (ASICs) are usually used to accelerate them in order to achieve low power consumption and high performance.

[0003] The additive neural network (AdderNet) is a kind of convolutional neural network which uses the sum of absolute differences (Sum of Absolute Difference...
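The sum-of-absolute-differences operation that AdderNet substitutes for the multiply-accumulate of a standard convolution can be sketched as follows. This is an illustrative NumPy sketch of the AdderNet similarity measure, not the patent's hardware implementation; the function name `adder_conv_patch` is hypothetical.

```python
import numpy as np

def adder_conv_patch(x_patch, w):
    """One output of an AdderNet 'convolution': the negated sum of
    absolute differences between an input patch and a filter.

    x_patch, w: 1-D arrays of equal length (flattened patch/filter).
    AdderNet replaces the dot product sum(x * w) with -sum(|x - w|),
    so the layer needs only additions and subtractions, no multipliers.
    """
    return -np.abs(x_patch - w).sum()

# Tiny usage example with a 3-element "patch" and "filter".
x = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])
print(adder_conv_patch(x, w))  # -(0.5 + 1.0 + 1.0) = -2.5
```

Because each |x - w| term reduces to a compare followed by an add or subtract, the operation maps naturally onto the sign-extraction-plus-addition datapath the accelerator describes, which is why additions dominate the hardware cost instead of multipliers.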

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/063; G06N3/08; G06F1/3234
CPC: G06N3/063; G06N3/082; G06F1/3234
Inventor: 齐志凌星宇刘昊史旭龙
Owner: SOUTHEAST UNIV