
Shared memory based method for realizing multiprocess GPU (Graphics Processing Unit) sharing

A shared-memory, multi-process technology in the field of sharing a GPU among multiple processes using shared memory for data communication, which solves the problem that multiple processes cannot share the use of the GPU.

Inactive Publication Date: 2012-01-18
NAT UNIV OF DEFENSE TECH
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0006] The technical problem to be solved by the present invention is: aiming at the situation that multiple processes of an SPMD program on a single heterogeneous computing node cannot share the use of the GPU, a shared-memory-based method for data communication is proposed to realize shared use of the GPU by multiple processes.




Embodiment Construction

[0027] Specific implementation

[0028] Figure 3 is a schematic diagram of multiple processes sharing the GPU in the present invention.

[0029] Two GPU clients and one GPU server run on the computing node. Each GPU client allocates its own memory space, identified by the client's process number (pid). When a GPU client needs the GPU, it sends a user signal, which enters the signal queue. The GPU server responds to the user signals in the queue, enters the signal-handling function, and performs accelerated computation on the GPU.
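The client/server handshake described above can be sketched in a minimal single-process simulation. The patent uses OS user signals and the kernel's signal queue; in this sketch a plain list stands in for that queue, and `multiprocessing.shared_memory` blocks named after a client id stand in for the pid-identified shared regions. All names (the `gpu-shm-*` region naming, the squaring "kernel") are illustrative assumptions, not taken from the patent.

```python
# Sketch of the signal-queue GPU-sharing scheme (illustrative, not the
# patent's actual implementation).
from multiprocessing import shared_memory

SIGNAL_QUEUE = []  # stand-in for the operating system's signal queue


def client_request(client_id, value):
    """GPU client: place input in its own shared region, then 'signal' the server."""
    # Each client allocates its own region, identified by its id (pid in the patent).
    shm = shared_memory.SharedMemory(name=f"gpu-shm-{client_id}", create=True, size=8)
    shm.buf[:8] = value.to_bytes(8, "little")
    SIGNAL_QUEUE.append(client_id)  # the request signal enters the queue
    return shm  # caller keeps the handle to read the result later


def server_step():
    """GPU server: service the earliest queued request. Requests are handled
    one at a time, so the (single) GPU is never entered concurrently."""
    client_id = SIGNAL_QUEUE.pop(0)
    shm = shared_memory.SharedMemory(name=f"gpu-shm-{client_id}")
    x = int.from_bytes(shm.buf[:8], "little")
    shm.buf[:8] = (x * x).to_bytes(8, "little")  # stand-in for GPU computation
    shm.close()
    return client_id
```

If two clients queue requests, the server services them in FIFO order, mirroring how the signal queue serializes access to the single GPU.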

[0030] Figure 4 is the overall flow chart of the present invention.

[0031] To test the effect of the present invention, the School of Computer Science of the National University of Defense Technology carried out experimental verification on a single CPU+GPU heterogeneous computing node. The node is configured as follows: two Intel Xeon 5670 six-core CPUs, each with ...



Abstract

The invention discloses a shared-memory-based method for realizing multiprocess GPU (Graphics Processing Unit) sharing, aiming to solve the problem that multiple processes of an SPMD (Single Program Multiple Data) program on a single heterogeneous computing node cannot share the GPU. In the technical scheme, the GPU server side is started and waits for request signals from GPU clients; a GPU client is started and sends a request signal to the GPU server when it needs GPU acceleration; the GPU server responds to the request signal that arrives first, performs accelerated computation on the GPU, and simultaneously executes the CPU (Central Processing Unit) part of the computation. While the GPU is computing, request signals from other GPU clients enter a signal queue in the operating system; after the GPU computation completes, the GPU server responds to the remaining request signals in the queue. On a single heterogeneous computing node containing one GPU, the method ensures that multiple processes of an SPMD program that need GPU acceleration run without faults and share the GPU.

Description

technical field

[0001] The invention relates to a method for sharing a Graphics Processing Unit (GPU), in particular to a method for sharing the GPU among multiple processes using shared memory for data communication.

Background technique

[0002] In recent years, with the continuous development of GPU hardware technology and programming models, the powerful computing capability of GPUs has attracted more and more attention. A large number of scientific computing programs use GPUs to accelerate their key code segments and obtain good speedups. A scientific computing program that uses the GPU performs the following tasks: initialize the GPU, prepare data for the GPU, compute on the GPU, copy the calculation results back from the GPU, and release the GPU.

[0003] However, existing GPUs do not support simultaneous access by multiple processes. After one process initializes the GPU, other processes cannot use the GPU until it is released. The parallel program of SPMD (S...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F15/167; G06F9/50
Inventor 杜云飞, 杨灿群, 易会战, 王锋, 黄春, 赵克佳, 陈娟, 李春江, 左克, 彭林
Owner NAT UNIV OF DEFENSE TECH