In-memory computing device suitable for binary convolutional neural network computing

An in-memory computing technology for binary convolutional neural networks, in the field of integrated circuits. It solves the problems of reduced computing speed and wasted power consumption, achieving the effects of increased computing speed, avoidance of data exchange, and reduced chip power consumption.

Active Publication Date: 2020-05-08
FUDAN UNIV

AI Technical Summary

Problems solved by technology

Moving data between storage and compute in this way not only reduces calculation speed, but also wastes power consumption during the data transfer.



Examples


Embodiment Construction

[0025] The present invention is described in further detail below with reference to the embodiments and drawings. The invention should not be regarded as limited to the embodiments set forth herein.

[0026] The embodiment is an in-memory computing device suitable for binary convolutional neural network computing. Figure 1 shows a block diagram of its top-level circuit modules.

[0027] The device includes a 256×128 in-memory computing array, a 128-input adder tree, a static random-access memory unit for storing intermediate results together with a corresponding accumulator group for updating them, a post-processing quantization unit, and a control unit.

[0028] Each row of the in-memory computing array can store 128 input channels of either the weights or the input feature map. The control unit selects the two corresponding rows according to the weight address and the input-feature-map address to complete the exclusive-OR operation.
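A minimal software sketch of the row-wise operation described above, assuming the encoding commonly used by binary networks, where values {+1, −1} are stored as bits {0, 1}. The function names, the NumPy modeling, and the encoding are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

ROW_WIDTH = 128  # bits per row, matching the 256x128 array described above

def xor_rows(weight_row: np.ndarray, input_row: np.ndarray) -> np.ndarray:
    """Bitwise XOR between a stored weight row and an input-feature row.

    With {+1, -1} encoded as {0, 1}, the XOR result marks the positions
    where weight and input differ (a mismatch contributes -1 to the
    +/-1 dot product, a match contributes +1).
    """
    assert weight_row.shape == (ROW_WIDTH,) and input_row.shape == (ROW_WIDTH,)
    return np.bitwise_xor(weight_row, input_row)

def binary_dot(weight_row: np.ndarray, input_row: np.ndarray) -> int:
    """+/-1 dot product recovered from the XOR popcount:
    matches - mismatches = width - 2 * popcount(xor)."""
    mismatches = int(xor_rows(weight_row, input_row).sum())
    return ROW_WIDTH - 2 * mismatches
```

Two identical 128-bit rows therefore yield a dot product of 128, and two complementary rows yield −128, which matches the ±1 arithmetic a binary convolution layer expects.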

[0029] The exclusive OR output r...



Abstract

The invention belongs to the technical field of integrated circuits, and particularly relates to an in-memory computing device suitable for binary convolutional neural network computing. The device comprises an in-memory computing array based on static random-access memory, used for realizing vector-wise exclusive-OR operations; a multi-input adder tree for accumulating exclusive-OR results across the input channels; a storage unit for temporarily holding intermediate results; an accumulator group for updating the intermediate results; a post-processing quantization unit for quantizing the high-precision accumulation result into a one-bit output feature value; and a control unit for controlling the calculation process and the data flow direction. The device completes the XOR operations of the binary neural network where the input data is stored, avoiding frequent data exchange between the storage unit and the computing unit, thereby increasing computing speed and reducing chip power consumption.
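The pipeline the abstract describes (in-memory XOR, adder-tree accumulation across 128 channels, accumulator update of the intermediate result, then one-bit quantization) can be sketched in software as follows. The `threshold` parameter and all function names are illustrative assumptions; the patent itself does not specify the quantization threshold here:

```python
import numpy as np

WIDTH = 128  # channels per row, matching the 128-input adder tree

def adder_tree(xor_bits: np.ndarray) -> int:
    """Models the 128-input adder tree: sums 128 one-bit XOR outputs."""
    return int(np.sum(xor_bits))

def pipeline(weight_rows, input_rows, threshold: int = 0) -> int:
    """One output feature value: XOR -> adder tree -> accumulate -> quantize.

    weight_rows / input_rows: sequences of 128-bit {0,1} arrays, with
    {+1, -1} encoded as {0, 1} (an assumed encoding).
    """
    acc = 0  # intermediate result, held in SRAM / the accumulator group
    for w, x in zip(weight_rows, input_rows):
        mismatches = adder_tree(np.bitwise_xor(w, x))  # in-memory XOR
        acc += WIDTH - 2 * mismatches                  # +/-1 dot-product term
    # post-processing quantization: high-precision sum -> one-bit feature
    return 1 if acc >= threshold else 0
```

The accumulator loop reflects the stated motivation: only one-bit XOR results leave the array each cycle, while the high-precision sum stays in the intermediate-result storage until the final one-bit quantization.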

Description

Technical field

[0001] The invention belongs to the technical field of integrated circuits, and in particular relates to an in-memory computing device suitable for binary convolutional neural network computation.

Background technique

[0002] Today, thanks to the continuous development of deep convolutional neural networks, they are widely used in fields such as image classification, autonomous driving, target recognition and tracking, and speech recognition. In pursuit of higher accuracy, the depth and width of these networks keep increasing, and the resulting growth in computation and data storage makes them unsuitable for terminal devices with limited computing resources and power budgets.

[0003] To resolve this conflict between deep convolutional neural network algorithms and their hardware implementation, various quantization methods have emerged. Low-precision and even binary convolutional neural networks can achieve performance c...
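As background for the quantization methods mentioned above, here is a hedged sketch of the standard sign-based binarization used by binary CNNs in general; this is the generic technique, not necessarily the patent's own method, and the {0, 1} bit encoding is an assumption chosen so that XOR computes mismatches:

```python
import numpy as np

def binarize_weights(w: np.ndarray) -> np.ndarray:
    """Generic sign binarization: real weights -> {+1, -1},
    then encoded as bits {+1 -> 0, -1 -> 1} for XOR-based compute."""
    signs = np.where(w >= 0, 1, -1)
    return ((1 - signs) // 2).astype(np.uint8)
```

For example, `binarize_weights(np.array([0.5, -0.2]))` yields the bit pattern `[0, 1]`, i.e. +1 and −1 under the assumed encoding.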

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/06; G06F1/3234
CPC: G06N3/063; G06F1/3234; Y02D10/00
Inventor 刘诗玮陈迟晓张怡云史传进
Owner FUDAN UNIV