Method for sharing GPU (graphics processing unit) by multiple tasks based on CUDA (compute unified device architecture)

An implementation method for multi-task technology, applied to multi-programming devices, resource allocation, and related areas. It addresses the previously unexamined problem of multi-task sharing on GPUs and achieves simple multi-task sharing, good performance, and simplified programming work.

Inactive Publication Date: 2014-04-02
HUAWEI TECH CO LTD +1


Problems solved by technology

[0005] At present, no patent or literature has been found that discusses multi-task sharing on the GPU.




Embodiment Construction

[0059] The present invention will be further described below with a specific example. It should be noted, however, that the disclosed embodiments are intended to aid understanding of the present invention; those skilled in the art will appreciate that various replacements and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the present invention should not be limited to the content disclosed in the embodiments, and the protection scope of the invention is defined by the claims.

[0060] A specific example: three computing tasks (the specific content of the tasks is immaterial here).

[0061] The tasks have the following constraints: task 1 must complete after task 0, because task 1 needs the results of task 0; task 2 has no constraint relationship with task 0 or task 1. (See attached figure 3(a): circles represent tasks, arrows represent dependency constraints between tasks.)



Abstract

The invention discloses a method for sharing a GPU (graphics processing unit) among multiple tasks based on CUDA (compute unified device architecture). The method includes: creating a mapping table in Global Memory that records, for each Block of the merged Kernel, the task number it belongs to and its block index within that task; launching N Blocks with a single Kernel each time, where N equals the sum of the block counts of all tasks; satisfying the constraint relations among the original tasks by a marking and blocking-wait method; and sharing the Shared Memory among the multiple tasks through pre-application and static allocation. With this method, multi-task sharing can be realized simply and conveniently on the existing GPU hardware architecture, programming work in practical applications is simplified, and good performance is obtained under certain conditions.

Description

Technical Field

[0001] The invention relates to a method for realizing multi-task sharing of a GPU, in particular to a method for merging multiple tasks under NVIDIA's CUDA architecture to realize task parallelism, and belongs to the field of GPGPU computing.

Background Technique

[0002] GPGPU (general-purpose computing on graphics processing units) is a technology that uses GPUs for large-scale computing. CUDA is a GPGPU architecture provided by NVIDIA. Since its launch, CUDA has become a widely used form of many-core parallel computing.

[0003] The GPU has much higher floating-point computing capability and memory bandwidth than the CPU (see attached figure 1), and because of its high degree of parallelism it is very well suited to large-scale data processing.

[0004] However, due to the hardware design of the GPU, programming on a GPU differs from parallel programming on a CPU. A significant difference is that the GPU does not support multi-task sharing: each task running on the...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F9/50
Inventors: 黄锟, 陈一峯, 蒋吴军
Owner: HUAWEI TECH CO LTD