
Deferred Preemption Technique for Scheduling Graphics Processing Unit Command Streams

A graphics processing technology, classified under electrical digital data processing, processor architectures/configurations, multi-programming arrangements, etc.

Active Publication Date: 2018-04-10
QUALCOMM INC

AI Technical Summary

Problems solved by technology

Arbitration of GPU resources among the different applications currently executing on the host CPU can present a significant challenge to the host CPU, especially when a particular application requires high-priority access to the GPU.


Detailed Description of Embodiments

[0024] The present invention is directed to a deferred preemption technique for scheduling a stream of GPU commands for execution on a graphics processing unit (GPU). GPUs are increasingly used for user interface (UI) rendering, and the UI command stream typically must be serviced in a timely manner to achieve the visual effects and responsiveness that users expect from the UI. While a high-priority UI command stream is queued by the host CPU for execution on the GPU, the GPU may be executing another queued command stream associated with a different, lower-priority context, for example a non-UI graphics context, or a context that uses the GPU to execute general-purpose computing tasks (i.e., general-purpose GPU computing (GPGPU) tasks). In some cases, waiting for the lower-priority context to complete execution before executing the higher-priority UI command stream may not yield an acceptable user experience with the UI.
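The host-side half of this technique can be illustrated with a small sketch. This is a hypothetical simulation, not Qualcomm's implementation: the `Command` type, the opcode names, and the boundary policy are all assumptions made for illustration. The host walks a low-priority command stream and places a preemption token after each command that completes a self-contained unit of work, so preemption can only occur at points where GPU state is consistent.

```python
# Hypothetical host-side sketch of token placement (illustrative only).

from dataclasses import dataclass

@dataclass(frozen=True)
class Command:
    opcode: str          # e.g. "DRAW", "SET_STATE", "DISPATCH" (assumed names)
    payload: int = 0

PREEMPT_TOKEN = Command("PREEMPT_TOKEN")

# Assumed policy: preemption is allowed only after commands that complete a
# self-contained unit of work (a draw call or a compute dispatch).
SAFE_BOUNDARY_OPCODES = {"DRAW", "DISPATCH"}

def place_tokens(stream):
    """Return a new command stream with a token after each safe boundary."""
    out = []
    for cmd in stream:
        out.append(cmd)
        if cmd.opcode in SAFE_BOUNDARY_OPCODES:
            out.append(PREEMPT_TOKEN)
    return out
```

For a stream `[SET_STATE, DRAW, DRAW]`, this yields `[SET_STATE, DRAW, TOKEN, DRAW, TOKEN]`: the original commands are untouched, and the GPU gains two well-defined points at which it may switch to a higher-priority stream.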

[0025] One ...


Abstract

The present invention is directed to a deferred preemption technique for scheduling a stream of GPU commands for execution on a graphics processing unit (GPU). A host CPU configured to control a GPU to perform deferred-preemption scheduling is described. For example, in response to receiving a preemption notification, the host CPU may select one or more locations in the GPU command stream as locations at which preemption is allowed to occur, and may place one or more tokens in the GPU command stream based on the selected locations. The tokens may indicate to the GPU that preemption is allowed to occur at the selected locations. This disclosure further describes a GPU configured to preempt execution of a GPU command stream based on one or more tokens placed in the command stream.
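The GPU-side half of the scheme might behave as in the following sketch. This is an illustrative simulation, not the patent's actual hardware behavior; the string-based commands, the `"TOKEN"` marker, and the `execute` function are invented for the example. The loop performs work for ordinary commands and checks for a pending high-priority stream only when it reaches a token; at that point it runs the high-priority stream to completion and then resumes the preempted stream at the next command.

```python
# Hypothetical simulation of a GPU front end that preempts only at tokens.

def execute(stream, high_priority=None, preempt_at=0):
    """Run `stream`; if `high_priority` is set, switch to it at the first
    token whose index is >= `preempt_at`. Returns the execution trace."""
    trace = []
    for i, cmd in enumerate(stream):
        if cmd == "TOKEN":
            if high_priority is not None and i >= preempt_at:
                trace.extend(high_priority)   # run the UI stream to completion
                high_priority = None          # then resume where we left off
            continue                          # tokens themselves do no work
        trace.append(cmd)
    return trace
```

For example, `execute(["a", "TOKEN", "b", "TOKEN", "c"], ["ui1", "ui2"])` produces the trace `["a", "ui1", "ui2", "b", "c"]`: command `a` finishes, the UI stream runs at the first token, and the preempted stream then resumes with `b` and `c`.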

Description

Technical Field

[0001] The present invention relates to graphics processing systems and, more particularly, to graphics processing systems that utilize command streams.

Background

[0002] Computing devices often utilize graphics processing units (GPUs) to accelerate the rendering of graphics data for display. Such computing devices may include, for example, computer workstations, mobile telephones such as so-called smartphones, embedded systems, personal computers, tablet computers, and video game consoles. GPUs typically execute a graphics processing pipeline that includes multiple processing stages operating together to execute graphics processing commands. A host central processing unit (CPU) can control the operation of the GPU by issuing one or more graphics processing commands to it. Modern CPUs are often capable of executing multiple applications concurrently, each of which may need to utilize the GPU during execution. Arbitration of GPU resources ...
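The arbitration problem described in the background can be pictured as a priority queue of pending command streams, with the host dispatching the highest-priority stream first. This is a minimal sketch under assumed conventions, not part of the disclosed invention: the class and method names are invented, lower numbers denote higher priority, and submissions with equal priority are served in FIFO order.

```python
# Illustrative sketch of host-side arbitration among per-application streams.

import heapq
import itertools

class GpuArbiter:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break within a priority level

    def submit(self, priority, app, stream):
        # Lower number = higher priority (e.g. 0 for UI rendering).
        heapq.heappush(self._heap, (priority, next(self._seq), app, stream))

    def next_stream(self):
        """Pop the highest-priority pending stream, or None if idle."""
        if not self._heap:
            return None
        _, _, app, stream = heapq.heappop(self._heap)
        return app, stream
```

Note that a queue like this by itself only orders *future* work; the deferred-preemption tokens described above are what allow a stream that is already running on the GPU to be interrupted safely.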

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T 1/20; G06F 9/48
CPC: G06F 9/4806; G06F 9/461; G06F 9/4812; G06T 1/20
Inventor: Eduardus A. Metz, Nigel Terence Poole, Colin Christopher Sharp, Andrew Gruber
Owner: QUALCOMM INC