Distributed bit line compensation digital-analog hybrid in-memory computing array

A technology of in-memory computing arrays with bit line compensation, applied in the field of distributed bit line compensation digital-analog hybrid in-memory computing arrays. It addresses the problems that calculation results cannot be accurately quantized, that word lines and bit lines are heavily loaded, and that the accuracy of calculation results is affected, and achieves the effects of improved charging linearity, a small word line drive load, and area savings.

Active Publication Date: 2022-05-24
中科南京智能技术研究院

AI Technical Summary

Problems solved by technology

[0003] The traditional calculation method of multiplying a single-bit input by a single-bit weight is inefficient, and a single calculation unit consumes a large number of transistors. Moreover, because the weight is connected to the source and drain of the calculation transistor, the bit line voltage can be disturbed when the calculation scale is too large. In addition, when the same column contains multiple calculation units performing a calculation, and the number of effective calculation units is too large, the increase of the coupling capacitor voltage becomes nonlinear in the number of effective calculation units, so the calculation result cannot be accurately quantized. Secondly, the traditional arrangement of large-array storage and calculation units places an excessive load on the word lines and bit lines, which causes the pulse signal on the word line to attenuate significantly. If the word line pulse width becomes narrow after this attenuation, the time available for the effective calculation units to charge the coupling capacitor is shortened, which also affects the accuracy of the calculation result.
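As an illustration of the nonlinearity described above, the following toy Python model (not taken from the patent; the component values and the parallel-RC charging picture are assumptions) shows how the coupling-capacitor voltage saturates as the number of effective calculation units grows, and how a narrowed word-line pulse shrinks the per-unit voltage step available to the quantizer.

```python
# Toy RC-charging model (illustrative only; not the circuit from this patent).
# It shows why the coupling-capacitor voltage stops tracking the number of
# active compute units linearly, and why a narrower word-line pulse reduces
# the usable voltage swing. All component values below are arbitrary.

import math

VDD = 0.9          # supply voltage (V), assumed
R_UNIT = 50e3      # effective charging resistance of one active unit (ohm), assumed
C_COUPLE = 20e-15  # coupling capacitance (F), assumed

def coupled_voltage(n_active: int, pulse_width: float) -> float:
    """Voltage on the coupling capacitor after one word-line pulse.

    n_active units charge the capacitor in parallel, so the effective
    resistance is R_UNIT / n_active and the voltage follows an RC curve.
    """
    if n_active == 0:
        return 0.0
    tau = (R_UNIT / n_active) * C_COUPLE
    return VDD * (1.0 - math.exp(-pulse_width / tau))

for pulse in (2e-9, 1e-9):          # nominal pulse vs. attenuated (narrower) pulse
    v1 = coupled_voltage(1, pulse)  # narrower pulse -> smaller per-unit step
    print(f"pulse width = {pulse * 1e9:.1f} ns, per-unit step = {v1:.3f} V")
    for n in (1, 2, 4, 8, 16, 32):
        v = coupled_voltage(n, pulse)
        # If charging were linear, v / v1 would equal n; the gap is the
        # quantization error described in the text.
        print(f"  n = {n:2d}   V = {v:.3f} V   apparent count = {v / v1:5.2f}")
```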


Image

Three drawings: Distributed bit line compensation digital-analog hybrid in-memory computing array.


Embodiment Construction

[0042] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0043] The purpose of the present invention is to provide a distributed bit line compensation digital-analog hybrid in-memory computing array. It uses 8T calculation units, which relatively reduces the number of transistors, and because the calculation logic is decoupled from the weight storage unit during the multiplication stage of the calculation, read-write interference is eliminated. At the same time, the current mirror compensator proposed in this design corrects the nonlinearity of the voltage accumulated when multiple calculation units charge the coupling capacitor.
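As a rough behavioral sketch of the decoupling idea only (the actual 8T transistor topology and the current mirror compensator are not reproduced here; the class and function names below are illustrative assumptions), the following Python model treats each cell as a stored weight bit whose compute port merely reads the bit, so the column result is the count of cells where input AND weight is 1.

```python
# Behavioral sketch of a decoupled compute cell and one column's accumulation
# (assumption-level model; not the patent's 8T circuit or compensator).

from dataclasses import dataclass

@dataclass
class ComputeCell:
    weight: int  # stored weight bit (0/1), held in the storage core

    def contribute(self, input_bit: int) -> int:
        # The compute port only senses the stored bit, so the multiplication
        # (input AND weight) does not disturb the cell content.
        return input_bit & self.weight

def column_popcount(cells, inputs):
    """Digital reference of what the analog column should produce:
    the number of cells whose (input AND weight) equals 1."""
    return sum(c.contribute(x) for c, x in zip(cells, inputs))

cells = [ComputeCell(w) for w in (1, 0, 1, 1, 0, 1, 0, 1)]
inputs = [1, 1, 0, 1, 1, 1, 0, 0]
print(column_popcount(cells, inputs))  # -> 3
```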



Abstract

The invention relates to the technical field of in-memory computing, in particular to a distributed bit line compensation digital-analog hybrid in-memory computing array, which comprises an in-memory computing module, an output module, a main control module and an input driving module; the main control module is connected with the output module and the input driving module respectively. The in-memory computing module comprises four cluster in-memory computing sub-modules, and each cluster in-memory computing sub-module comprises four groups of in-memory computing units; each group of in-memory computing units comprises 32 rows × 8 columns of storage calculation circuits distributed in an array. The storage calculation circuits in each row are connected in parallel and then connected with the input driving module; the storage calculation circuits in each column are connected in series, each column is connected in series with a current mirror compensator, and each column is connected with the output module through a coupling capacitor. According to the invention, the number of transistors is reduced, read-write interference is eliminated, and the nonlinearity of the voltage accumulated when a plurality of calculation units charge the coupling capacitor is corrected.
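A purely structural sketch of the hierarchy described in the abstract may help: 4 cluster sub-modules, each with 4 groups of 32 × 8 storage calculation circuits, rows driven in parallel and columns accumulated toward the output module. Only the dimensions come from the abstract; the analog path (coupling capacitor, current mirror compensator) is reduced to a plain digital sum in this sketch, and all names are illustrative.

```python
# Structural sketch of the array hierarchy from the abstract:
# 4 clusters x 4 groups x (32 rows x 8 columns) of storage/compute circuits.
# Purely functional; analog details are represented by an exact digital sum.

import random

ROWS, COLS, GROUPS, CLUSTERS = 32, 8, 4, 4

def make_group():
    # One group: a 32 x 8 matrix of stored single-bit weights.
    return [[random.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]

array = [[make_group() for _ in range(GROUPS)] for _ in range(CLUSTERS)]

def group_mac(group, row_inputs):
    """Broadcast one input bit per row (rows are driven in parallel) and
    return the per-column accumulation the output module would digitize."""
    return [sum(row_inputs[r] & group[r][c] for r in range(ROWS))
            for c in range(COLS)]

row_inputs = [random.randint(0, 1) for _ in range(ROWS)]
print(group_mac(array[0][0], row_inputs))   # 8 column sums, each in 0..32
```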

Description

Technical field

[0001] The invention relates to the technical field of in-memory computing, in particular to a distributed bit line compensation digital-analog hybrid in-memory computing array.

Background technique

[0002] In recent years, artificial intelligence (AI) has placed growing demands on energy-efficient computing systems, including edge intelligence and its applications, and von Neumann architectures are widely used to support various tasks using processing units (PEs), control units, and memory. Since the advent of artificial intelligence systems and deep neural networks (DNNs), von Neumann architectures have struggled to keep up with DNNs, because DNNs in AI systems require massive parallel multiply-accumulate (MAC) operations. During MAC operation, the transfer of a large number of weights and intermediate outputs between the processing units (PEs) and the memory is unavoidable, which leads to unavoidable power consumption and delay and limits some AI applications, such as battery ...
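For readers unfamiliar with the term, a MAC is a multiply followed by an accumulate. The minimal Python snippet below (sizes and values are arbitrary) shows one output neuron's dot product and counts the operands that must travel between memory and the processing unit in a von Neumann machine, which is the data-movement cost the background refers to.

```python
# Minimal illustration of the multiply-accumulate (MAC) workload and of the
# memory traffic it implies. A counting exercise only, not a benchmark.

def neuron_output(weights, activations):
    # One output neuron = one dot product = len(weights) MAC operations.
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a          # each MAC needs w and a fetched from memory
    return acc

weights = [1, -2, 3, 0, 1]
activations = [4, 1, 0, 2, 5]
print(neuron_output(weights, activations))        # -> 7
print("operands moved:", 2 * len(weights))        # weights + activations
```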


Application Information

Patent Type & Authority: Application (China)
IPC (8): G11C 11/54; G11C 11/418; G11C 11/412; G11C 7/12; G11C 8/08; G11C 5/06; G06N 3/063
CPC: G11C 11/54; G11C 11/418; G11C 11/412; G11C 7/12; G11C 8/08; G11C 5/063; G06N 3/065; Y02D 10/00
Inventors: 乔树山, 史万武, 尚德龙, 周玉梅
Owner: 中科南京智能技术研究院