
Memristor-based programmable neural network accelerator

A neural-network and memristor technology in the field of novel intelligent computing processors. It addresses the growing bandwidth and energy-consumption demands of conventional architectures, and achieves reduced power consumption and bandwidth requirements, low power consumption overall, and high flexibility.

Active Publication Date: 2021-12-31
ZHEJIANG LAB
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, as network and data scales increase, the bandwidth and energy consumption required by traditional von Neumann architecture AI chips keep growing, and the "memory wall" problem inherent to that architecture has become increasingly prominent.




Embodiment Construction

[0022] To make the purpose, technical solution, and technical effects of the present invention clearer, the invention is described in further detail below in conjunction with the accompanying drawings.

[0023] As shown in Figure 1, the memristor-based programmable neural network accelerator of the present invention is connected to the SOC bus through an interface and consists of eight parts: an instruction memory, an instruction fetch unit, an instruction decode unit, a control unit, an arithmetic logic unit, a vector processing unit, a data memory, and a memristor storage and calculation unit. The control unit is connected to the global unit module. The instruction memory, instruction fetch unit, and instruction decode unit are sequentially connected as a whole and are responsible for instruction storage and handling; that is, after instructions are fetched and decoded, they are transmitted to the cont...
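The memristor storage and calculation unit in the architecture above computes in memory rather than shuttling weights over the bus. As an illustration of that principle only (the patent discloses hardware, not software, so this sketch is not from the patent): a memristor crossbar stores each weight as a conductance, input voltages drive the rows, and each column's output current is the dot product of the voltages with that column's conductances, so a whole matrix-vector multiplication happens in one analog step.

```python
# Illustrative model of a memristor crossbar multiply (not from the patent):
# weights live as conductances G[i][j]; by Ohm's and Kirchhoff's laws the
# current on column j is sum_i G[i][j] * V[i], i.e. one column of G^T @ V.

def crossbar_mvm(conductances, voltages):
    """Model the analog MVM: column current = sum_i G[i][j] * V[i]."""
    rows = len(conductances)
    cols = len(conductances[0])
    assert len(voltages) == rows, "one input voltage per crossbar row"
    return [sum(conductances[i][j] * voltages[i] for i in range(rows))
            for j in range(cols)]

# Example: a 2x3 weight array mapped to conductances, driven by 2 inputs.
G = [[0.5, 1.0, 0.0],
     [2.0, 0.0, 1.5]]
V = [1.0, 2.0]
print(crossbar_mvm(G, V))  # [4.5, 1.0, 3.0]
```

In hardware the multiply-accumulate is free, performed by the physics of the array; the Python loop only mirrors the arithmetic, which is why such units cut the bandwidth and power that a von Neumann chip would spend moving weights.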



Abstract

The invention belongs to the field of novel intelligent computing processors and relates to a memristor-based programmable neural network accelerator. The accelerator is connected to an SOC bus through an interface and comprises an instruction processing module, a control unit, and an execution unit module. The control unit controls both the instruction processing module and the execution unit module. The instruction processing module is formed by sequentially connecting an instruction memory, an instruction fetch unit, and an instruction decode unit into a whole; after fetching and decoding an instruction, it transmits the instruction information to the control unit and the instruction's data to the execution unit module. The execution unit module comprises an arithmetic logic unit, a vector processing unit, a data memory, and a memristor storage unit; the arithmetic logic unit and the vector processing unit are responsible for register calculation and vector calculation, respectively, and the data memory is connected to the memristor storage unit and then to the vector processing unit. The accelerator offers high flexibility, low bandwidth requirements, low power consumption, and a high degree of parallelism.
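The instruction path the abstract describes can be sketched as fetch, decode, then dispatch: register calculations go to the arithmetic logic unit and vector calculations to the vector processing unit. The mnemonics, register names, and dispatch rules below are invented for illustration, since the published excerpt does not define an instruction set.

```python
# Hypothetical sketch of the fetch/decode/dispatch flow (ISA invented here):
# the control unit routes scalar ops to the ALU and vector ops to the
# vector processing unit.

def decode(word):
    """Split a textual instruction into an opcode and its operand names."""
    op, *args = word.split()
    return op, args

def run(program, regs, vecs):
    """Fetch each instruction, decode it, and dispatch to an execution unit."""
    for word in program:                 # fetch from instruction memory
        op, args = decode(word)          # decode
        if op == "ADD":                  # ALU: register calculation
            d, a, b = args
            regs[d] = regs[a] + regs[b]
        elif op == "VADD":               # vector unit: elementwise add
            d, a, b = args
            vecs[d] = [x + y for x, y in zip(vecs[a], vecs[b])]
    return regs, vecs

regs = {"r0": 0, "r1": 2, "r2": 3}
vecs = {"v0": [], "v1": [1, 2], "v2": [3, 4]}
run(["ADD r0 r1 r2", "VADD v0 v1 v2"], regs, vecs)
print(regs["r0"], vecs["v0"])  # 5 [4, 6]
```

In the patent's design the decoded instruction's control information goes to the control unit while its data goes to the execution units; the single loop above collapses that split purely for readability.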

Description

Technical Field
[0001] The invention belongs to the field of novel intelligent computing processors and relates to a memristor-based programmable neural network accelerator.
Background
[0002] With the large-scale application of artificial intelligence, AI chips have become widespread. Compared with traditional processing chips, AI chips offer higher computing power and energy efficiency. Most current AI chips for deep-learning network acceleration are optimized mainly for parallel computing to improve performance, for example IBM's TrueNorth, Google's TPU, and Cambricon's DianNao. However, as network and data scales increase, the bandwidth and energy consumption required by traditional von Neumann architecture AI chips keep growing, and the "memory wall" problem caused by that architecture has become increasingly prominent. Therefore, both academia and industry are continually exploring new architecture systems and techn...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N 3/04; G06N 3/063; G06N 3/08
CPC: G06N 3/063; G06N 3/08; G06N 3/045
Inventors: 顾子熙, 时拓, 张程高, 高丽丽, 王志斌, 李一琪
Owner: ZHEJIANG LAB