
CPU (central processing unit) and GPU (graphic processing unit) on-chip cache sharing method and device

An on-chip cache sharing technology for CPUs and GPUs, applied in the field of CPU/GPU shared on-chip caches, addressing problems such as mutual performance degradation, the latency sensitivity of CPU requests, and the GPU being unable to process images in real time, with the effect of improving program performance.

Active Publication Date: 2014-07-16
NAT UNIV OF DEFENSE TECH


Problems solved by technology

However, the limited on-chip memory bandwidth can hardly satisfy the high bandwidth requirements of the CPU and the GPU at the same time, which affects the performance of both.
In addition, the memory access characteristics of the CPU and the GPU differ considerably, placing different demands on the on-chip cache.
The CPU's memory access requests are latency-sensitive and must be served quickly, while the GPU's memory access requests are bandwidth-sensitive and require high-bandwidth service; otherwise the GPU cannot process the images to be displayed in real time.
In summary, shared use of the on-chip cache degrades the performance of the CPU and the GPU to a certain extent, making it impossible to satisfy both the CPU's low-latency requirements and the GPU's high-bandwidth requirements.




Embodiment Construction

[0057] As shown in Figure 1, the implementation steps of the method for sharing the on-chip cache between the CPU and the GPU in this embodiment are as follows:

[0058] 1) Classify memory access requests from the CPU and memory access requests from the GPU, and buffer each class separately;

[0059] 2) Arbitrate among the different classes of buffered memory access requests; the request that wins arbitration enters the pipeline;

[0060] 3) Check the type of each memory access request entering the pipeline. If it is a memory access request from the CPU, the request's read and write data are cached when it is executed. If it is a memory access request from the GPU, the data it reads from or writes to the external memory bypasses the cache and the external memory is operated on directly; only when a GPU write hits in the cache is the CPU core notified to invalidate or update its private copy of the data.
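Below is a minimal C++ sketch of this three-step flow, modeling the shared cache and the external memory as simple maps. All identifiers (Request, CachePipeline, notify_cpu_cores, and so on) are illustrative assumptions for exposition, not names from the patent; the invention itself is a hardware device, so this is only a behavioral model of step 3's type check and cache bypass.

    #include <cstdint>
    #include <unordered_map>

    enum class Source { CPU, GPU };
    enum class Op { Read, Write };
    struct Request { Source src; Op op; uint64_t addr; uint32_t data; };

    // Behavioral model of the cache pipeline execution unit (step 3).
    class CachePipeline {
        std::unordered_map<uint64_t, uint32_t> cache;   // shared on-chip cache (toy)
        std::unordered_map<uint64_t, uint32_t> memory;  // external memory (toy)
    public:
        // Returns the data for reads; for writes the return value is unused.
        uint32_t execute(const Request& r) {
            if (r.src == Source::CPU) {
                // CPU requests are latency-sensitive: allocate and serve in-cache.
                if (r.op == Op::Write) { cache[r.addr] = r.data; return 0; }
                if (!cache.count(r.addr)) cache[r.addr] = memory[r.addr];  // fill on miss
                return cache[r.addr];
            }
            if (r.op == Op::Write) {
                // GPU writes bypass the cache and go straight to external memory.
                memory[r.addr] = r.data;
                // Only when the write hits in the cache is coherence work needed:
                // drop the stale shared line and notify the CPU core to invalidate
                // or update its private copy.
                if (cache.erase(r.addr)) notify_cpu_cores(r.addr);
                return 0;
            }
            // GPU reads also bypass the cache: served from memory, no allocation.
            return memory[r.addr];
        }
    private:
        void notify_cpu_cores(uint64_t addr) { (void)addr; /* invalidate/update private copies */ }
    };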



Abstract

The invention discloses a CPU (central processing unit) and GPU (graphics processing unit) on-chip cache sharing method and device. The method includes: classifying and buffering memory access requests from the CPU and the GPU; arbitrating among the different classes of buffered requests; when a CPU request is executed, caching its read and write data in the high-speed cache; and when a GPU request is executed, letting the data read from or written to the external memory bypass the high-speed cache and operate on the external memory directly, notifying a CPU core to invalidate or update its private data copy only when a write hits in the high-speed cache. The device comprises a CPU request queue, a GPU request queue, an arbiter, and a cache pipeline execution unit. The method and device accommodate the different memory access characteristics of the CPU and the GPU at the same time, offer high performance, and are simple and cheap to implement in hardware.
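Continuing the toy C++ model from the embodiment section above (it reuses Request, Source, and CachePipeline), the fragment below mirrors the claimed device structure: a CPU request queue, a GPU request queue, an arbiter, and the cache pipeline execution unit. The round-robin arbitration policy is an assumption for illustration; this excerpt does not specify one.

    #include <deque>

    // Behavioral model of the claimed device: two classification queues feeding
    // an arbiter, whose winner enters the cache pipeline execution unit.
    class CacheShareDevice {
        std::deque<Request> cpu_queue, gpu_queue;  // step 1: buffer by source
        CachePipeline pipeline;                    // step 3: execution unit
        bool cpu_turn = true;                      // step 2: arbitration state
    public:
        void submit(const Request& r) {
            (r.src == Source::CPU ? cpu_queue : gpu_queue).push_back(r);
        }
        // One arbitration round: the winning request enters the pipeline.
        void tick() {
            auto& preferred = cpu_turn ? cpu_queue : gpu_queue;
            auto& other     = cpu_turn ? gpu_queue : cpu_queue;
            auto& q = !preferred.empty() ? preferred : other;
            if (q.empty()) return;
            pipeline.execute(q.front());
            q.pop_front();
            cpu_turn = !cpu_turn;  // simple round-robin between the two classes
        }
    };

A hardware arbiter might instead weight the policy, for example prioritizing CPU requests to protect their latency while guaranteeing the GPU a minimum service rate; the queue-plus-arbiter structure stays the same.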

Description

technical field [0001] The invention relates to the field of computer microprocessors, in particular to a method and device for sharing an on-chip cache memory between a CPU and a GPU. Background technique [0002] With the rapid development of VLSI and embedded technology, more and more transistor resources are available on a single chip, and system-on-chip (SoC) technology has emerged in response. An SoC chip often integrates multiple IP cores with different functions and is relatively complete in functionality. SoC chips used in handheld terminals such as mobile phones and PDAs can integrate almost all functions of an embedded information processing system, realizing information collection, input, storage, processing, output and other functions on a single chip. Some current embedded systems (such as mobile phones and game consoles) place higher demands on the performance of multimedia processing such as graphics, images, and video, so the graphic...


Application Information

IPC(8): G06F13/18; G06F9/38; G06F12/08; G06F12/084; G06F15/167
Inventors: 石伟 (Shi Wei), 邓宇 (Deng Yu), 郭御风 (Guo Yufeng), 龚锐 (Gong Rui), 任巨 (Ren Ju), 张明 (Zhang Ming), 马爱永 (Ma Aiyong), 高正坤 (Gao Zhengkun), 窦强 (Dou Qiang), 童元满 (Tong Yuanman)
Owner: NAT UNIV OF DEFENSE TECH