Deep learning chip based dynamic cache allocation method and device

A dynamic cache allocation method and device for a deep learning chip, applied in the field of computer technology. It addresses the problem that frequent data movement overwhelms the chip's storage structure and cannot meet the needs of large-scale computing, achieving the effects of reduced external data access, lower bandwidth requirements, and more rational cache allocation.

Active Publication Date: 2018-09-11
FUZHOU ROCKCHIP SEMICON

Problems solved by technology

[0004] To this end, it is necessary to provide a technical solution for dynamic cache allocation based on a deep learning chip, to solve the problem that frequent data movement during operation of the neural network overwhelms the chip's storage structure and cannot meet the needs of large-scale computing.




Embodiment Construction

[0040] In order to explain in detail the technical content, structural features, achieved goals, and effects of the technical solution, it is described below in conjunction with specific embodiments and the accompanying drawings.

[0041] See Figure 1, a schematic structural diagram of a device for dynamic cache allocation based on a deep learning chip according to an embodiment of the present invention. The device includes a processor 101, a division information storage unit 102, a cache unit 103, an external storage unit 104, a neural network unit 105, and statistical units 106. The cache unit 103 includes a plurality of cache lines. The neural network unit 105 includes a plurality of neural network sublayers, each of which corresponds to a statistical unit 106. The neural network unit 105 is connected to the cache unit 103, and the cache unit 103 is connected to the processor 101 and the statistical units 106 respectively. The...
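The device structure in [0041] can be sketched as a minimal data model: a cache unit made of cache lines, a neural network unit made of sublayers, and one statistical unit per sublayer counting that sublayer's traffic to external storage. This is an illustrative sketch only; the class and field names (`StatisticalUnit`, `Sublayer`, `CacheUnit`, `external_bytes`) are assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class StatisticalUnit:
    """Counts bytes a sublayer exchanges with external storage (unit 106)."""
    external_bytes: int = 0

    def record(self, nbytes: int) -> None:
        self.external_bytes += nbytes

@dataclass
class Sublayer:
    """One neural network sublayer (inside unit 105), paired with its own statistical unit."""
    name: str
    stats: StatisticalUnit = field(default_factory=StatisticalUnit)

@dataclass
class CacheUnit:
    """On-chip cache (unit 103): a pool of cache lines plus a per-sublayer division table."""
    num_lines: int
    # mapping: sublayer name -> number of cache lines currently assigned to it
    allocation: dict = field(default_factory=dict)
```

In this sketch the `allocation` table plays the role of the division information storage unit 102, recording how the cache lines are split among the sublayers.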



Abstract

The invention provides a deep learning chip based dynamic cache allocation method and device. With a cache unit, the device enables a large amount of the neural network's data access to be completed inside the chip, which reduces the neural network's accesses to external storage, lowers the bandwidth requirement on external storage, and ultimately reduces bandwidth consumption. At the same time, the proportion of the cache unit allocated to each neural network sublayer is determined according to that sublayer's external data throughput, so that the limited cache space is allocated more rationally and the computational efficiency of the neural network is effectively improved.
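The allocation rule described above (cache share proportional to each sublayer's external data throughput) can be illustrated with a short sketch. The patent does not give a concrete algorithm; the largest-remainder rounding used here is an assumption chosen so that every cache line is assigned, and the function name and signature are hypothetical.

```python
def allocate_cache_lines(throughputs: dict, total_lines: int) -> dict:
    """Split `total_lines` cache lines among sublayers in proportion to each
    sublayer's measured external data throughput (bytes), using largest-remainder
    rounding so the whole cache is assigned."""
    total = sum(throughputs.values())
    if total <= 0:
        raise ValueError("no external throughput recorded")
    # ideal fractional share for each sublayer
    shares = {k: total_lines * v / total for k, v in throughputs.items()}
    alloc = {k: int(s) for k, s in shares.items()}
    # hand leftover lines to the largest fractional remainders
    leftover = total_lines - sum(alloc.values())
    for k in sorted(shares, key=lambda k: shares[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# Example: a sublayer moving 6x the data of another gets 6x the cache lines.
# allocate_cache_lines({"conv1": 600, "conv2": 300, "fc": 100}, 10)
# -> {"conv1": 6, "conv2": 3, "fc": 1}
```

Re-running this allocation as the per-sublayer statistics change is what makes the scheme "dynamic": the division of the cache tracks the observed traffic rather than being fixed at design time.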

Description

Technical Field

[0001] The present invention relates to the field of computer technology, and in particular to a method and device for dynamic cache allocation based on deep learning chips.

Background

[0002] With the rapid development of artificial intelligence technology, performance requirements for artificial intelligence equipment keep rising. At present, a major factor restricting the development of deep learning neural network equipment is that terminal neural network chips demand too much bandwidth; at the same time, the slow speed of accessing external memory also greatly limits the computing speed of the neural network.

[0003] The construction and operation of a neural network require a large amount of data transfer, such as reading neurons, weights, thresholds, and convolution kernel data, as well as the intermediate calculation results of each layer of the neural network, and error calculation and write-back during feedback t...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06N3/04, G06N3/063
CPC: G06N3/063, G06N3/045
Inventors: 廖裕民, 张钟辉
Owner FUZHOU ROCKCHIP SEMICON