
Pipelining Computational Resources in General-Purpose Graphics Processing Units

A graphics processing unit (GPU) pipelining technology, applicable to general-purpose stored-program computers, computing, and architectures with a single central processing unit, among other fields.

Active Publication Date: 2017-06-23
QUALCOMM INC

AI Technical Summary

Problems solved by technology

This limitation extends to 2D and 3D graphics processing, which uses parallel processing at each processing stage but requires computational resources to be pipelined between stages.




Embodiment Construction

[0016] This disclosure describes techniques for extending the architecture of a general-purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. Specifically, the techniques include configuring local memory buffers, connected to parallel processing units operating as stages of a processing pipeline, to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units, and may contain hardware-based data flow control mechanisms to manage those transfers. In this way, data can be passed directly from one parallel processing unit to the next in the processing pipeline via the local memory buffers, effectively transforming the parallel processing units into a series of pipeline stages. Local memory buffers can significantly reduce memory...
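As a hedged software analogy of the arrangement described above (the patent describes hardware; everything below is an illustrative model, not the patented implementation): each parallel processing unit can be modeled as a thread running one pipeline stage, and each local memory buffer as a bounded queue whose blocking put/get stands in for the hardware data-flow control, so data flows stage to stage without a round trip through "off-chip" shared memory.

```python
import threading
import queue

def make_stage(fn, inbuf, outbuf):
    # Each "processing unit" drains its input buffer, applies its
    # stage function, and writes results to the next buffer.
    # A None token signals end-of-stream down the pipeline.
    def run():
        while True:
            item = inbuf.get()
            if item is None:
                outbuf.put(None)
                return
            outbuf.put(fn(item))
    return threading.Thread(target=run)

# Bounded queues model the local memory buffers: maxsize provides the
# back-pressure that hardware flow control mechanisms would supply.
bufs = [queue.Queue(maxsize=4) for _ in range(3)]
stages = [
    make_stage(lambda x: x * 2, bufs[0], bufs[1]),  # pipeline stage 1
    make_stage(lambda x: x + 1, bufs[1], bufs[2]),  # pipeline stage 2
]
for s in stages:
    s.start()

for v in [1, 2, 3]:
    bufs[0].put(v)
bufs[0].put(None)

results = []
while (out := bufs[2].get()) is not None:
    results.append(out)
print(results)  # [3, 5, 7]
```

The key design point mirrored here is that a stage never touches global state: its only interfaces are its input and output buffers, which is what lets the stages run concurrently on independent units.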



Abstract

This disclosure describes techniques for extending the architecture of a general-purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. The techniques include configuring local memory buffers, connected to parallel processing units operating as stages of a processing pipeline, to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units, and may include hardware-based data flow control mechanisms to enable those transfers. In this way, data can be passed directly from one parallel processing unit to the next in the processing pipeline via the local memory buffers, effectively transforming the parallel processing units into a series of pipeline stages.

Description

Technical Field

[0001] The present invention relates to processing data, and more particularly, to processing data using a general-purpose graphics processing unit (GPGPU).

Background

[0002] A general-purpose graphics processing unit (GPGPU) is a generalized version of a graphics processing unit (GPU) originally designed to handle 2D and 3D graphics. A GPGPU extends the high-power parallel processing of GPUs to general data-processing applications beyond graphics. As one example, a GPU may be configured to process data according to the OpenCL specification, which gives certain applications access to a graphics processing unit for non-graphics computing. The "OpenCL Specification, Version 1.1" was published in June 2010 and is publicly available.

[0003] GPGPUs consist of programmable processing units arranged in a highly parallel architecture that does not allow data sharing or synchronization between processing units. Instead, individual processing units ...
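A minimal sketch of the data-parallel model described in the background, in the spirit of an OpenCL kernel launch (the names `kernel` and `global_id` are illustrative, not actual OpenCL API): every work-item runs the same program on its own index, touching only its own data, with no communication between processing units. This independence is exactly why a multi-stage algorithm otherwise needs a trip through off-chip memory between kernel launches.

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(global_id, data):
    # The same program runs on every "processing unit"; each
    # work-item reads and writes only its own element -- no data
    # sharing and no synchronization between units.
    return data[global_id] * data[global_id]

data = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    out = list(pool.map(lambda i: kernel(i, data), range(len(data))))
print(out)  # [1, 4, 9, 16]
```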


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/44, G06F15/78, G06F15/82
CPC: G06F15/17325, G06F9/38, G06F9/46
Inventors: Alexei V. Bourd, Andrew Gruber, Aleksandra L. Krstic, Robert J. Simpson, Colin Sharp, Chun Yu
Owner: QUALCOMM INC