
Neural network processor based on weight compression, design method, and chip

A neural-network processor technology, applied in the field of hardware acceleration of neural network model computation. It addresses the problem that computations on zero-valued data cannot be skipped to speed up calculation and reduce computation power consumption, with the effects of improving calculation speed, improving energy efficiency, and reducing on-chip resource occupation.

Active Publication Date: 2017-03-22
中科时代(深圳)计算机系统有限公司

AI Technical Summary

Benefits of technology

The technical effect of the patented method is improved neural network (NN) processing performance: data paired with zero-valued weights need not be processed, avoiding slowdown and excess power consumption, and the compressed weight form also reduces resource usage in other parts of the system.

Problems solved by technology

The patent addresses the computation speed and resource usage of deep convolutional neural networks, whose high dimensionality and complex structures make them costly to execute. Existing approaches process all weights, including zero-valued ones, leading to increased latency and poor energy utilization on general-purpose hardware such as CPU cores. The proposed solution is a processor architecture based on weight compression, described in detail below.


Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0037] In studying neural network processors, it was found that the weights of a neural network exhibit a degree of sparsity: a large number of weights have the value 0. After multiply-accumulate operations, these weights and their corresponding data have no numerical effect on the result. Yet zero-valued weights still occupy a large amount of on-chip resources and consume extra working time during storage, loading, and computation, making it difficult to meet the performance requirements of a neural network processor.
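The observation above can be illustrated with a short software sketch (illustrative only, not the patented circuitry): a multiply-accumulate loop that skips zero weights produces exactly the same result as the dense loop while performing far fewer multiplications.

```python
def dense_mac(weights, data):
    """Dense multiply-accumulate: every weight is processed."""
    acc = 0
    for w, x in zip(weights, data):
        acc += w * x
    return acc

def sparse_mac(weights, data):
    """Skip zero weights: same result, fewer multiplications."""
    acc = 0
    mults = 0
    for w, x in zip(weights, data):
        if w != 0:          # a zero weight contributes nothing to acc
            acc += w * x
            mults += 1
    return acc, mults

weights = [0, 3, 0, 0, -2, 0, 1, 0]   # 5 of 8 weights are zero
data    = [4, 1, 7, 2,  5, 9, 6, 8]

dense_result = dense_mac(weights, data)
sparse_result, mults = sparse_mac(weights, data)
assert dense_result == sparse_result   # identical numerical result
print(dense_result, mults)             # only 3 of 8 multiplications performed
```

In hardware the same idea means the processor never loads, stores, or multiplies the zero entries at all, which is where the speed and energy savings come from.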

[0038] Analysis of the computation structure of existing neural network processors shows that the weight values of a neural network can be compressed to speed up operation and reduce energy consumption. The prior art provides the basic architecture of a neural network accelerator; building on that existing technology, a weight compression storage format is proposed. After weight dat...
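The patent's exact storage format is not reproduced in this summary; as an illustrative stand-in, a common compressed-sparse scheme stores each nonzero weight together with its position, so only the nonzero entries occupy storage:

```python
def compress(weights):
    """Store only nonzero weights as (index, value) pairs.

    This (index, value) scheme is an illustrative assumption; the
    patent's actual weight compression storage format is not shown
    in the text above.
    """
    return [(i, w) for i, w in enumerate(weights) if w != 0]

def decompress(pairs, length):
    """Rebuild the dense weight vector from the compressed form."""
    dense = [0] * length
    for i, w in pairs:
        dense[i] = w
    return dense

weights = [0, 3, 0, 0, -2, 0, 1, 0]
packed = compress(weights)                          # [(1, 3), (4, -2), (6, 1)]
assert decompress(packed, len(weights)) == weights  # lossless round trip
print(len(packed), "of", len(weights), "entries stored")
```

Because the round trip is lossless, computation on the compressed form can match the dense result exactly while storing and loading far fewer values.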



Abstract

The invention proposes a neural network processor based on weight compression, a design method, and a chip. The processor comprises: at least one storage unit for storing operation instructions and the data participating in computation; at least one storage-unit controller for controlling the storage unit; at least one computation unit for executing the computation operations of a neural network; a control unit, connected with the storage-unit controller and the computation unit, which obtains the instructions held in the storage unit via the storage-unit controller and parses them to control the computation unit; and at least one weight retrieval unit for retrieving weights, each connected with the computation unit to ensure correct operation on the compressed weights and their corresponding data. The invention reduces the storage resources occupied by weights in a neural network processor, improves operation speed, and improves energy efficiency.
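A rough software model can show how a weight retrieval unit might align compressed weights with their corresponding data before handing both to the computation unit. The (index, value) pairing and the function below are assumptions for illustration; the abstract does not describe the hardware datapath at this level of detail.

```python
def retrieve_and_compute(compressed_weights, data):
    """Software model of a weight retrieval unit feeding a computation unit.

    `compressed_weights` holds (index, value) pairs for the nonzero
    weights. The retrieval step fetches only the data elements that
    pair with a stored weight, so the computation unit never sees a
    zero weight. (Illustrative model, not the patented hardware.)
    """
    acc = 0
    for idx, w in compressed_weights:
        x = data[idx]      # retrieval: fetch the matching data element
        acc += w * x       # computation: multiply-accumulate
    return acc

data = [4, 1, 7, 2, 5, 9, 6, 8]
compressed = [(1, 3), (4, -2), (6, 1)]   # nonzero weights and their positions
dense_weights = [0, 3, 0, 0, -2, 0, 1, 0]
assert retrieve_and_compute(compressed, data) == sum(
    w * x for w, x in zip(dense_weights, data)
)
print(retrieve_and_compute(compressed, data))
```

The retrieval unit's role, as described in the abstract, is precisely this alignment: guaranteeing that each compressed weight operates on the data element it would have met in the uncompressed computation.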

