
Data compression method, data compression system and operation method of deep learning acceleration chip

A deep learning acceleration chip technology, applied in neural learning methods, electrical digital data processing, digital data information retrieval, and similar fields, addressing performance-bottleneck problems of deep learning acceleration chips.

Pending Publication Date: 2022-07-01
IND TECH RES INST

AI Technical Summary

Problems solved by technology

How to break through this performance ceiling has become a major technical bottleneck for deep learning acceleration chips.




Embodiment Construction

[0037] Please refer to figure 1, which is a schematic diagram of the deep learning acceleration chip 100 according to an embodiment. During the operation of the deep learning acceleration chip 100, an external memory 200 (e.g., a DRAM) is required to store the trained filter coefficient tensor matrix H of the deep learning model. After the filter coefficient tensor matrix H is transferred to the temporary register 110, the operation unit 120 performs the operation.

[0038] The researchers found that, in the operation of the deep learning acceleration chip 100, the most time-consuming and power-consuming step is transferring the filter coefficient tensor matrix H from the memory 200. Therefore, the researchers are committed to reducing the amount of data transferred between the memory 200 and the deep learning acceleration chip 100, in order to speed up the deep learning acceleration chip 100 and reduce its power consumption.
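As a rough illustration of this bottleneck, the sketch below compares transfer time against compute time for one inference; every figure is an assumption chosen for illustration, not a value from the patent.

```python
# Back-of-envelope estimate of why moving H dominates. Every number
# below is an illustrative assumption, not a figure from the patent.

coeff_bytes = 8 * 1024 * 1024  # assumed size of filter tensor H: 8 MiB
bus_bw = 2 * 1024**3           # assumed effective shared-bus bandwidth: 2 GiB/s
macs = 2 * 10**9               # assumed multiply-accumulates per inference
mac_rate = 10**12              # assumed accelerator throughput: 1 TMAC/s

transfer_ms = coeff_bytes / bus_bw * 1e3
compute_ms = macs / mac_rate * 1e3
print(f"transfer {transfer_ms:.2f} ms vs compute {compute_ms:.2f} ms")
# ~3.9 ms of transfer against ~2.0 ms of compute: under these
# assumptions the bus, not the arithmetic, sets the pace, so shrinking
# the bytes moved shrinks both latency and DRAM-access energy.
```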

[0039] Pl...



Abstract

The invention discloses a data compression method, a data compression system, and an operation method of a deep learning acceleration chip. The data compression method includes the following steps: obtaining a filter coefficient tensor matrix of a deep learning model; performing a matrix decomposition procedure on the filter coefficient tensor matrix to obtain at least one sparse tensor matrix and at least one conversion matrix, where the product of the conversion matrix and the filter coefficient tensor matrix is the sparse tensor matrix, and the conversion matrix is an orthonormal matrix; compressing the sparse tensor matrix; and storing the sparse tensor matrix and the conversion matrix, or the sparse tensor matrix and the restoration matrix, in a memory. The deep learning acceleration chip performs its operation using the sparse tensor matrix to obtain a convolution operation result, and then restores the convolution operation result using the restoration matrix.
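To make the flow concrete, below is a minimal NumPy sketch of the scheme described in the abstract. The SVD-based choice of the conversion matrix, the low-rank example matrix, and all variable names are illustrative assumptions; the abstract only requires that the conversion matrix be orthonormal and that its product with the filter coefficient tensor matrix be sparse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative low-rank "filter coefficient tensor matrix" H (8x8, rank 3).
H = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))

# Matrix decomposition step: choose an orthonormal conversion matrix T
# so that S = T @ H is sparse. Using the left singular vectors of H,
# every row of S beyond rank(H) is (numerically) zero.
U, _, _ = np.linalg.svd(H)
T = U.T                      # orthonormal: T @ T.T ~= identity
S = T @ H                    # sparse tensor matrix (5 of 8 rows ~ 0)
S[np.abs(S) < 1e-10] = 0.0   # all-zero rows now compress to almost nothing

R = T.T                      # restoration matrix: the inverse of an
                             # orthonormal T is simply its transpose

# Operation on the accelerator: compute with the sparse S, then restore.
X = rng.standard_normal((8, 4))      # illustrative input activations
Y = R @ (S @ X)                      # restored convolution result

assert np.allclose(Y, H @ X)         # matches the uncompressed computation
print("nonzero rows of S:", int(np.count_nonzero(np.abs(S).sum(axis=1))))
```

Because the conversion matrix is orthonormal, restoring the result costs only one extra small matrix multiply, while the accelerator's main workload runs on the much sparser matrix.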

Description

technical field [0001] The invention relates to a data compression method, a data compression system, and an operation method of a deep learning acceleration chip. Background technique [0002] With the development of deep learning technology, deep learning acceleration chips have been developed. Through a deep learning acceleration chip, complex convolution operations can be computed directly in hardware to speed up the operation. [0003] The deep learning acceleration chip is paired with a large-capacity memory and exchanges temporary data over the system bus. Ideally, data movement and computation are parallelized. In practice, however, because of the shared bus and other physical limitations, most of the time is occupied by data movement, resulting in lower performance than expected, and external memory accesses account for the major share of power consumption. Amdahl's law likewise shows that using parallelism to improve performance has an upper limit. From the comp...
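Amdahl's law, referenced above, caps the benefit of parallelism alone; a short worked example follows, where the parallel fraction p is an assumed value, not a figure from the patent.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the workload that parallelizes and n is the degree of parallelism.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assume 90% of the time is parallelizable compute; the serial
          # 10% is dominated by data movement over the shared bus
for n in (2, 8, 64, 10**6):
    print(f"n={n:>7}: speedup = {amdahl_speedup(p, n):.2f}x")
# Even with unbounded parallelism the speedup saturates at 1/(1-p) = 10x,
# which is why shrinking the data-movement term itself matters.
```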

Claims


Application Information

IPC(8): G06F16/174; G06N3/04; G06N3/08
CPC: G06F16/1744; G06N3/08; G06N3/045; G06N3/063; G06F9/30036; G06F9/30043; G06F9/3001; G06N3/044; G06F9/3877; G06F9/3836; G06N3/065
Inventor: 杨凯钧, 李国君, 孙际恬
Owner: IND TECH RES INST