
Computing unit, array, module, hardware system and implementation method

A computing unit and array technology, applied in the field of hardware acceleration for artificial intelligence algorithms. It addresses problems such as high power consumption, insufficiently optimized computing time, and the processing of invalid data, achieving low power consumption, reduced computing time, and reduced computational redundancy.

Pending Publication Date: 2019-07-30
南京宁麒智能计算芯片研究院有限公司

AI Technical Summary

Problems solved by technology

Although this prior invention also improves overall efficiency from the perspective of hardware architecture, it does not remove invalid data, resulting in high power consumption and insufficiently optimized calculation time.



Examples


Embodiment

[0032] As shown in figure 1, a hardware system comprises an on-chip device and an off-chip device. The on-chip device comprises a control module, a configuration module, a storage module, a computing module and a bus interface; the off-chip device comprises a CPU and an external storage module. The CPU of the off-chip device is electrically connected to the control module of the on-chip device; the external memory of the off-chip device is electrically connected to the storage module of the on-chip device; the control module of the on-chip device is electrically connected to the configuration module, the storage module and the computing module of the on-chip device; the configuration module of the on-chip device is electrically connected to the storage module and the computing module of the on-chip device; and the storage module of the on-chip device is electrically connected to the computing module.
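The connectivity described in [0032] can be sketched as a small undirected graph. This is an illustrative model only, not the patented circuit; the module names below are paraphrased from the embodiment, not taken from the claims.

```python
# Minimal sketch of the on-chip/off-chip module topology in [0032].
# Each pair is an "electrically connected" link from the embodiment.
CONNECTIONS = [
    ("cpu", "control"),              # off-chip CPU <-> on-chip control module
    ("external_memory", "storage"),  # off-chip memory <-> on-chip storage module
    ("control", "configuration"),
    ("control", "storage"),
    ("control", "compute"),
    ("configuration", "storage"),
    ("configuration", "compute"),
    ("storage", "compute"),
]

def neighbors(module):
    """Return all modules directly connected to `module`."""
    return sorted({b for a, b in CONNECTIONS if a == module} |
                  {a for a, b in CONNECTIONS if b == module})

# The control module fans out to the off-chip CPU and every other
# on-chip block, matching its role as the system coordinator:
# neighbors("control") -> ['compute', 'configuration', 'cpu', 'storage']
```

Modeling the links as a flat edge list mirrors how the embodiment enumerates connections pairwise, and makes it easy to verify that the computing module is reachable from both the control and storage paths.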

[0033] When the system is performing...


Abstract

The invention discloses a computing unit, an array, a module, a hardware system and an implementation method, belonging to the field of hardware acceleration for artificial intelligence algorithms. Aiming at the problems of huge data volume and long calculation time of sparse convolutional neural network algorithms in the prior art, an invalid-data removal mechanism is designed in the computing unit: invalid weights or input image data can be removed, reducing calculation time and the power consumption caused by multiply-accumulate operations. A multi-channel sub-computing unit is designed, which completes the convolution operation through a multiplexed accumulation channel mechanism, reducing resource consumption. With invalid data removed, a supply-number rotation mechanism is further designed so that the computing unit remains sufficiently supplied with operands. The method features low power consumption, small area, high throughput and high recognition speed, is suitable for mobile-terminal applications such as smart home and smart city, and can efficiently complete tasks such as license plate recognition and face recognition.
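The invalid-data removal idea in the abstract is, in software terms, zero-skipping in the multiply-accumulate loop: products where either the weight or the activation is zero contribute nothing, so skipping them saves both time and switching power. The sketch below is an illustrative software analogue under that assumption, not the patented hardware mechanism.

```python
# Illustrative zero-skipping multiply-accumulate, the software analogue
# of the "invalid data removal" mechanism described in the abstract.
def sparse_mac(weights, activations):
    """Accumulate products, skipping pairs where either operand is zero.

    Returns (accumulated_sum, number_of_multiplies_actually_performed).
    """
    acc = 0
    mac_ops = 0
    for w, a in zip(weights, activations):
        if w == 0 or a == 0:
            continue  # "invalid" pair: no multiply, no accumulation
        acc += w * a
        mac_ops += 1
    return acc, mac_ops

result, ops = sparse_mac([0, 2, 0, 3], [5, 4, 7, 0])
# Only the pair (2, 4) is valid: result == 8, and 3 of 4 MACs are skipped.
```

In a hardware computing unit the same effect is achieved by gating the multiplier when an operand is zero; the supply-number rotation mechanism mentioned in the abstract would then keep the unit fed with valid operands so its pipeline does not stall.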

Description

Technical field
[0001] The invention relates to the field of hardware acceleration for artificial intelligence algorithms, and in particular to a computing unit, an array, a module, a hardware system and an implementation method.
Background technique
[0002] A Convolutional Neural Network (CNN) is a feedforward neural network with a wide range of applications in artificial intelligence, including image recognition, big data processing, and natural language processing. To improve algorithm accuracy, the model structures of convolutional neural networks are becoming more complex and deeper. The resulting large model parameters and long calculation times hinder deployment of the algorithm in terminals such as smart homes, smart transportation, and other IoT applications. These issues have led to intensive research on the algorithms and hardware design of convolutional neural networks in pursuit of low power consumptio...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F15/78; G06F7/50; G06N3/04; G06N3/063
CPC: G06F15/7807; G06F7/50; G06N3/063; G06N3/045; Y02D10/00
Inventor: 李丽, 陈沁雨, 傅玉祥, 曹华锋, 何书专
Owner 南京宁麒智能计算芯片研究院有限公司