Cluster GPU (graphic processing unit) resource scheduling system and method

A resource scheduling technology, applied in resource allocation, multi-programming devices, etc., that can solve problems such as low scheduling efficiency, a single GPU being unable to carry complex computing tasks, and GPU cards in a cluster that cannot be plugged and played.

Active Publication Date: 2012-07-04
XIAMEN MEIYA PICO INFORMATION

AI Technical Summary

Problems solved by technology

[0005] In view of this, the present invention provides a cluster GPU resource scheduling system and method to solve the problems that an existing single GPU cannot carry complex computing tasks, that existing cluster GPU resource scheduling methods are inefficient, and that GPU cards in the cluster cannot be plugged and played.




Embodiment Construction

[0039] In order to solve the problems in the prior art, embodiments of the present invention provide a cluster GPU resource scheduling system and method. In the provided solution, all GPU resources are combined into a cluster, and a master node uniformly schedules each child node in the cluster. A child node only needs to set a unique ID number and its computing power and send this information to the master node; the master node classifies the GPU resources according to the information received from each child node. For an input task, the master node first coarsely divides the task and distributes the resulting sub-tasks to the child nodes, and each scheduled child node further divides its sub-task into fine-grained blocks to match the parallel computing model of the GPU.
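
The division of labor described in paragraph [0039] can be summarized with a minimal Python sketch: child nodes report a unique ID number and their computing power, the master node records them and coarsely splits an input task in proportion to that power, and each child further splits its share into fine-grained blocks. The class names, the proportional split, and the per-block summation below are illustrative assumptions, not the patented implementation.

from dataclasses import dataclass, field

@dataclass
class GpuChildNode:
    node_id: str          # unique ID number reported to the master node
    compute_power: float  # relative computing power reported to the master node

    def execute(self, sub_task: range, block_size: int = 256) -> list:
        # Fine division: carve the assigned sub-task into blocks sized to suit
        # GPU-style parallel execution; here each block is simply summed.
        items = list(sub_task)
        blocks = [items[i:i + block_size] for i in range(0, len(items), block_size)]
        return [sum(block) for block in blocks]

@dataclass
class MasterNode:
    children: list = field(default_factory=list)

    def register(self, child: GpuChildNode) -> None:
        # Classify and record the child's resources from its reported information.
        self.children.append(child)

    def schedule(self, task: range) -> dict:
        # Coarse division: give each child a contiguous slice of the task,
        # proportional to its reported computing power.
        total_power = sum(c.compute_power for c in self.children)
        results, start = {}, task.start
        for i, child in enumerate(self.children):
            if i == len(self.children) - 1:
                end = task.stop  # last child takes whatever remains
            else:
                end = start + int(len(task) * child.compute_power / total_power)
            results[child.node_id] = child.execute(range(start, end))
            start = end
        return results

master = MasterNode()
master.register(GpuChildNode("gpu-0", compute_power=2.0))
master.register(GpuChildNode("gpu-1", compute_power=1.0))
partial_sums = master.schedule(range(0, 3000))  # gpu-0 handles 2000 items, gpu-1 handles 1000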

[0040] The embodiments of the present invention will be described in detail below with reference to the drawings.

[0041] Figure 1 is a schematic structural diagram of a cluster GPU resource scheduling system pro...



Abstract

The invention provides a cluster GPU resource scheduling system. The system comprises a cluster initialization module, a GPU master node, and a plurality of GPU child nodes, wherein the cluster initialization module is used for initializing the GPU master node and the plurality of GPU child nodes; the GPU master node is used for receiving a task input by a user, dividing the task into a plurality of sub-tasks, and allocating the sub-tasks to the GPU child nodes by scheduling them; and the GPU child nodes are used for executing the sub-tasks and returning the task execution results to the GPU master node. The cluster GPU resource scheduling system and method provided by the invention can fully utilize GPU resources so as to execute a plurality of computation tasks in parallel. In addition, the method can also achieve a plug-and-play function for each child node GPU in the cluster.
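
As a rough illustration of the plug-and-play behavior claimed for child node GPUs, the sketch below keeps a live registry that child nodes can join or leave at runtime; the message format and function name are assumptions made for illustration and are not the patent's actual protocol.

registry = {}  # node_id -> reported computing power, maintained by the master node

def on_child_message(message: dict) -> None:
    # A child announces itself with {"event": "join", "node_id": ..., "power": ...}
    # or withdraws with {"event": "leave", "node_id": ...}.
    if message["event"] == "join":
        registry[message["node_id"]] = message["power"]
    elif message["event"] == "leave":
        registry.pop(message["node_id"], None)

# A GPU card plugged into the cluster simply joins the registry...
on_child_message({"event": "join", "node_id": "gpu-2", "power": 1.5})
# ...and an unplugged card leaves it; the master's scheduling logic is unchanged.
on_child_message({"event": "leave", "node_id": "gpu-2"})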

Description

Technical field

[0001] The invention relates to the technical field of computer networks, and in particular to a cluster GPU resource scheduling system and method.

Background technique

[0002] In recent years, the graphics processing unit (Graphic Processing Unit, GPU) has seen sustained, rapid development in hardware architecture and has evolved into a highly parallel, multi-threaded processor with many processing cores and powerful computing capabilities. Unlike the central processing unit (Central Processing Unit, CPU), it uses a Single Instruction Multiple Thread (SIMT) architecture, which increases the flexibility of programming. The GPU is dedicated to solving problems that can be expressed as data-parallel computing, that is, problems in which most data elements follow the same data path, and it has a very high computational density (the ratio of arithmetic operations to memory operations), which can hide memory access latency. With its powerful computing power, GPU parallel technology has launched a strong...
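
As a toy illustration of data-parallel computing in the sense used above (every element follows the same data path), the snippet below applies an identical arithmetic operation to each element independently; a real GPU would spread such work across many threads, which this plain-Python sketch does not attempt.

def saxpy(a: float, x: list, y: list) -> list:
    # Every output element depends only on the inputs at its own index and
    # follows an identical data path: one multiply and one add.
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]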


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/46; G06F9/50
Inventors: 汤伟宾, 吴鸿伟, 罗佳
Owner: XIAMEN MEIYA PICO INFORMATION