
A dynamic caching approach for addressing memory bandwidth efficiency in general-purpose AI processors

A memory-bandwidth and dynamic-cache technology, applied in memory systems, electrical digital data processing, instruments, etc. It addresses the problem that existing data caches cannot meet the needs of new general-purpose AI processors, and achieves the effects of reducing memory overhead, increasing bandwidth, and improving efficiency.

Active Publication Date: 2021-06-22
轸谷科技(南京)有限公司 +1

AI Technical Summary

Problems solved by technology

[0031] In view of the high-bandwidth data-flow requirements of general-purpose AI processors: existing data caches transfer data in the basic unit of a cache line, so each exchange moves only a single cache line of data, which clearly cannot meet the needs of the new general-purpose AI processors.




Embodiment Construction

[0054] The present invention will now be further described in conjunction with the accompanying drawings.

[0055] As shown in Figures 1 to 7, the dynamic caching method for addressing the memory bandwidth efficiency of a general-purpose AI processor adds a one-bit flag C to each cache line to form a data-segment cache; a data segment is stored contiguously across several conventional cache lines. Cached data is read according to the following steps:

[0056] S1. According to the read instruction issued by the CPU, determine whether it is a conventional data read or a data-segment read. If it is a conventional read, proceed according to the conventional read steps; if it is a data-segment read, go to S2.

[0057] S2. Determine the position of the data segment in the cache from the index field of the data segment's start address.

[0058] S3. Compare the tag of the data segment's start address with the tag ...
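The remaining steps (S4 through S6, spelled out in the Abstract below) check the V bit, the C bit, and the C bits of the lines that follow. A minimal sketch in C of how such a data-segment lookup could work is given here; it assumes a direct-mapped cache, and the names (`cache_line_t`, `NUM_SETS`, `segment_read`) and geometry are illustrative assumptions, not taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_SETS   256            /* illustrative cache geometry */
#define LINE_BYTES 64

/* One conventional cache line, extended with the C flag described above. */
typedef struct {
    bool     v;                   /* V bit: line holds valid data          */
    bool     c;                   /* C bit: line belongs to a data segment */
    uint32_t tag;                 /* tag field of the cached address       */
    uint8_t  data[LINE_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_SETS];   /* direct-mapped for simplicity */

/* Steps S2-S6 for a data-segment read of `len` bytes starting at `addr`
 * (S1, dispatching on the instruction type, is assumed done by the caller).
 * Returns true on a hit, false on a cache miss. */
bool segment_read(uint32_t addr, size_t len, uint8_t *out)
{
    /* S2: the index field of the start address selects the set. */
    uint32_t index = (addr / LINE_BYTES) % NUM_SETS;
    uint32_t tag   = addr / (LINE_BYTES * NUM_SETS);

    if (cache[index].tag != tag)  /* S3: tags must match */
        return false;
    if (!cache[index].v)          /* S4: V bit must be 1 */
        return false;
    if (!cache[index].c)          /* S5: C bit must be 1 */
        return false;

    /* S6: every line covering the requested length must also have its
     * C bit set, i.e. the segment is stored contiguously in the cache. */
    size_t lines = (len + LINE_BYTES - 1) / LINE_BYTES;
    for (size_t i = 1; i < lines; i++) {
        cache_line_t *next = &cache[(index + i) % NUM_SETS];
        if (!next->v || !next->c)
            return false;         /* segment broken: cache miss */
    }

    /* Continuous read: stream the whole segment in one burst. */
    for (size_t i = 0; i < lines; i++) {
        const uint8_t *src = cache[(index + i) % NUM_SETS].data;
        size_t n = (i == lines - 1 && len % LINE_BYTES) ? len % LINE_BYTES
                                                        : LINE_BYTES;
        for (size_t b = 0; b < n; b++)
            out[i * LINE_BYTES + b] = src[b];
    }
    return true;
}
```

The key point of S6 is that an unbroken run of set C bits certifies the segment is resident contiguously, so the whole run can be streamed in one burst instead of one cache line per transaction.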



Abstract

The present invention relates to a dynamic caching method for addressing the memory bandwidth efficiency of a general-purpose AI processor. A one-bit flag C is added to each cache line, and data segments are stored contiguously across conventional cache lines. Data is read according to the following steps: S1, dispatch on the CPU read instruction; if it is a data-segment read, go to S2. S2, determine the position of the data segment from the index field. S3, compare the tag of the data segment with the tag in the cache; if they match, go to S4. S4, check the V bit; if the V bit is 1, go to S5. S5, check the C bit; if the C bit is 1, go to S6. S6, according to the length of data requested by the read instruction, check the C bits of the cache lines following the start address: if the C bits of all the contiguously stored data are set to 1 and the data length satisfies the requested read length, a continuous read is performed; otherwise a cache miss occurs. The method can satisfy the computing needs of the CPU and the AI processor at the same time.
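On the fill side, the abstract implies that every line spanned by a segment must have its C bit set. A hypothetical helper, reusing the definitions from the sketch in the embodiment section above (again an assumption-laden sketch, not the patent's implementation), might look like:

```c
/* Hypothetical fill-side helper: install `len` bytes starting at `addr`
 * as one contiguous data segment, setting V and C on every line it spans
 * so that a later segment_read() finds an unbroken run of C bits. */
void segment_fill(uint32_t addr, size_t len, const uint8_t *src)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_SETS;
    uint32_t tag   = addr / (LINE_BYTES * NUM_SETS);
    size_t   lines = (len + LINE_BYTES - 1) / LINE_BYTES;

    for (size_t i = 0; i < lines; i++) {
        cache_line_t *line = &cache[(index + i) % NUM_SETS];
        line->v   = true;
        line->c   = true;    /* mark the line as part of the segment */
        line->tag = tag;     /* simplification: a real cache derives the
                                tag per line from that line's own address */
        for (size_t b = 0; b < LINE_BYTES && i * LINE_BYTES + b < len; b++)
            line->data[b] = src[i * LINE_BYTES + b];
    }
}
```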

Description

technical field

[0001] The invention relates to a data cache, in particular to a dynamic caching method for addressing the memory bandwidth efficiency of a general-purpose AI processor.

Background technique

[0002] At present, artificial intelligence is widely used in many fields, and deep neural network technology has become the representative algorithm of the field. Key technologies based on deep neural networks, such as character recognition, image classification, and speech recognition, are widely used in search engines, smartphones, and other products.

[0003] The core computing unit of deep neural network technology is the multiply-accumulate operation, and multiply-accumulator arrays are commonly used for matrix multiplication. The MAC (multiply-accumulate) array is therefore the core of AI computation, and more and more general-purpose computing chips cater to AI computing needs by adding a dedicated MAC array to improve computing po...
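As a concrete illustration of the background's point, matrix multiplication decomposes into multiply-accumulate operations; a hardware MAC array evaluates many of these in parallel, which is what drives the bandwidth demand discussed above (an illustrative sketch, not part of the patent):

```c
/* Matrix multiply as repeated multiply-accumulate (MAC) operations:
 * C[i][j] = sum over k of A[i][k] * B[k][j]. A hardware MAC array
 * evaluates many such accumulations in parallel, so keeping it busy
 * needs far more data per cycle than one cache line per transaction. */
void matmul_mac(int n, const float *A, const float *B, float *C)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            float acc = 0.0f;
            for (int k = 0; k < n; k++)
                acc += A[i * n + k] * B[k * n + j];  /* one MAC per step */
            C[i * n + j] = acc;
        }
}
```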

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F12/0886; G06F12/0868
CPC: G06F12/0868; G06F12/0886
Inventors: 蔡浩田, 沈亚明, 葛悦飞
Owner: 轸谷科技(南京)有限公司