
Array calculation accelerator architecture based on binary neural network

A binary neural network technology, applied in the field of array computing accelerator architectures, which addresses the problems of slow speed, high computing power consumption, and low reconfigurability, with the effects of reducing area, meeting computing requirements, and enhancing versatility.

Pending Publication Date: 2022-01-28
UNIV OF ELECTRONIC SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problems of the prior art, the present invention provides an array computing accelerator architecture based on a binary neural network to overcome the problems of high power consumption, slow speed, and low reconfigurability of current neural network models.


Image

  • Array calculation accelerator architecture based on binary neural network

Examples


Embodiment Construction

[0026] The technical solution of the present invention is described in detail below in conjunction with the accompanying drawings:

[0027] Currently, to perform matrix multiplication, each processing unit in the array typically consists of a multiplier, an adder, and a register, which respectively carry out multiplication, accumulation, and partial-sum storage. However, the multiplier occupies a large chip area, and executing the multiplication introduces delay and consumes considerable power. These problems limit the performance improvement of the computing array to a certain extent.
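To make the point concrete, the following is a minimal sketch, not taken from the patent, of such a conventional processing element: a multiplier and an adder feeding a partial-sum register, performing one multiply-accumulate per step.

    # Sketch of the conventional processing element described above
    # (illustrative only): multiplier + adder + partial-sum register.
    class MACProcessingElement:
        def __init__(self):
            self.partial_sum = 0            # register holding the partial sum

        def step(self, activation, weight):
            product = activation * weight   # multiplier: large area, high power
            self.partial_sum += product     # adder: accumulate into the register

        def read(self):
            return self.partial_sum

    # One output element of a matrix product, accumulated over four cycles.
    pe = MACProcessingElement()
    for a, w in zip([1, -2, 3, 4], [5, 6, -7, 8]):
        pe.step(a, w)
    print(pe.read())  # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4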

[0028] The computation performed by the FC (fully connected) layer in a neural network is essentially matrix multiplication. For this computation, the calculation core operates in the form of block matrix multiplication. The general process is: for a matrix A with i rows an...
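As an illustration of block matrix multiplication, the sketch below computes C = A x B tile by tile; the tile size T and the NumPy implementation are assumptions for illustration only, not parameters taken from the patent.

    # Illustrative block (tiled) matrix multiplication for an FC layer.
    import numpy as np

    def blocked_matmul(A, B, T=4):
        i, k = A.shape
        _, j = B.shape
        C = np.zeros((i, j), dtype=A.dtype)
        for ii in range(0, i, T):
            for jj in range(0, j, T):
                for kk in range(0, k, T):
                    # each tile product corresponds to one pass over the PE array
                    C[ii:ii+T, jj:jj+T] += A[ii:ii+T, kk:kk+T] @ B[kk:kk+T, jj:jj+T]
        return C

    A = np.random.randint(-2, 3, size=(8, 6))
    B = np.random.randint(-2, 3, size=(6, 10))
    assert np.array_equal(blocked_matmul(A, B), A @ B)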



Abstract

The invention belongs to the technical field of integrated circuits and neural networks, and particularly relates to an array calculation accelerator architecture based on a binary neural network. In this invention, the processing unit in the calculation core adopts a two-to-one selector in place of a multi-bit multiplier to accelerate the calculation of the FC layer of the binary neural network, greatly reducing the chip storage and calculation area, the calculation delay, and the power consumption. Meanwhile, a configurable neural network function module is integrated in the accelerator, so that the calculation requirements of various current neural network algorithm models are met to a great extent, enhancing the versatility of the accelerator.
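To illustrate the idea in the abstract, the sketch below shows why a selector can replace the multiplier when weights are binarized to {+1, -1}: each multiplication reduces to choosing between the activation and its negation. The function name and the weight-bit encoding (1 for +1, 0 for -1) are assumptions for illustration, not the patent's implementation.

    # Illustrative binary multiply-accumulate: a 2-to-1 selection
    # (activation vs. its negation) stands in for a multi-bit multiplier.
    def binary_mac(activations, weight_bits):
        acc = 0
        for x, w in zip(activations, weight_bits):
            acc += x if w == 1 else -x   # selector replaces the multiply
        return acc

    print(binary_mac([3, -1, 2, 5], [1, 0, 0, 1]))  # 3 + 1 - 2 + 5 = 7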

Description

Technical field

[0001] The invention belongs to the field of integrated circuit technology and neural network technology, and in particular relates to an array computing accelerator architecture based on a binary neural network.

Background technique

[0002] With the optimization of integrated circuit design schemes and the continuous improvement of integrated circuit process technology, the performance of contemporary processors and memories has achieved a qualitative leap, yet computer performance has now encountered a bottleneck. As a classic computer structure, the von Neumann architecture limits the data interaction capability between the processor and the memory. This bottleneck is significantly reflected in the calculation of neural network models, which are characterized by a large number of parameters and many multiplication and addition operations. When performing neural network model calculations, frequently calling param...

Claims


Application Information

IPC(8): G06N3/063
CPC: G06N3/063; Y02D10/00
Inventor: 胡绍刚, 李天琛, 乔冠超, 于奇, 刘洋
Owner UNIV OF ELECTRONIC SCI & TECH OF CHINA