Hybrid scheduling method oriented to the central processing unit (CPU) and graphics processing unit (GPU)

A hybrid scheduling and scheduler technology, applied in the field of job scheduling in high-performance computing, which addresses the problem of describing GPU resources in the scheduling system and achieves the effect of accurate scheduling

Publication status: Inactive; publication date: 2012-06-27
DAWNING INFORMATION IND BEIJING
Cites: 3 | Cited by: 14

AI Technical Summary

Problems solved by technology

The resources managed by the traditional job scheduling system are mostly operating system resources (such as nodes, memory, and CPUs); the GPU is not described as an available resource in the scheduling system and therefore does not participate in scheduling decisions. Two questions commonly arise as a result: first, how to describe GPU resources and GPU resource requests; second, how to let GPU applications coexist with traditional parallel applications (MPI, OpenMP, PThread) so that system resources are used rationally.

Detailed Description of the Embodiments

[0023] The purpose of the present invention is to solve the problem of optimally scheduling GPU/CPU jobs in a GPU/CPU heterogeneous cluster environment.

[0024] (1) First, when the scheduler is initialized, parameters such as scheduling policy configuration and priority configuration are read;
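
As an illustration of step (1), the sketch below shows one way a scheduler could read its scheduling policy and priority configuration at start-up. The file name, the configuration keys, and the default values (including the GPU weight mentioned in paragraph [0027]) are assumptions made for this sketch, not values defined by the patent.

    import json

    # Hypothetical configuration: a scheduling policy plus priority weights,
    # including a GPU weight (compare paragraph [0027]). All keys are illustrative.
    DEFAULT_CONFIG = {
        "scheduling_policy": "priority_first_fit",  # assumed policy name
        "priority_weights": {
            "cpu": 1.0,         # weight per requested CPU core
            "memory": 0.001,    # weight per requested MB of memory
            "gpu": 8.0,         # GPU weight parameter
            "wait_time": 0.01,  # weight per second of queue wait
        },
        "cycle_seconds": 30,    # sleep between scheduling cycles (step S6)
    }

    def load_scheduler_config(path="scheduler_config.json"):
        """Read the scheduler configuration at initialization (step (1) / S1)."""
        try:
            with open(path) as f:
                user_cfg = json.load(f)
        except FileNotFoundError:
            user_cfg = {}
        cfg = dict(DEFAULT_CONFIG)
        cfg.update(user_cfg)  # user settings override the defaults
        return cfg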

[0025] (2) Secondly, in each scheduling cycle the scheduler reads the various types of information in the job scheduling system, including job information (such as job status information and resource request information), node information (such as node status information and node configuration information), and queue information (such as queue configuration information and queue status information), etc.;
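
For concreteness, the job, node, and queue information of step (2) could be held in records like the following; every field name here is an assumption made for this sketch rather than a format defined by the patent.

    from dataclasses import dataclass, field

    # Illustrative records for the information read each scheduling cycle
    # (step (2) / S2). Field names are assumptions, not the patent's format.

    @dataclass
    class JobInfo:
        job_id: str
        state: str               # job status information, e.g. "queued" or "running"
        cpus_requested: int      # resource request information
        gpus_requested: int      # GPU request, described as an explicit resource
        mem_requested_mb: int
        wait_seconds: float = 0.0

    @dataclass
    class NodeInfo:
        node_id: str
        state: str               # node status information, e.g. "idle" or "down"
        cpus_free: int           # node configuration / availability
        gpus_free: int           # GPUs exposed as a schedulable resource
        mem_free_mb: int

    @dataclass
    class QueueInfo:
        name: str
        state: str               # queue status information
        max_running_jobs: int = 0
        job_ids: list = field(default_factory=list)  # queue configuration information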

[0026] (3) Thirdly, the scheduler performs the priority calculation according to the job resource request information and the scheduling policy, determines the priority of each job, and sorts the jobs in descending order of priority;

[0027] In the priority configuration parameters, a GPU weight parameter is added ...
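
A minimal sketch of the priority calculation of paragraphs [0026] and [0027] follows, assuming the data structures and weights from the sketches above. The linear weighted sum is an assumption; the patent only states that a GPU weight parameter is added to the priority configuration.

    # Sketch of the priority calculation (step (3) / S3): the GPU weight lets a
    # job's GPU request raise its priority relative to non-GPU requests.
    # The linear form below is an assumption, not the patent's exact formula.

    def job_priority(job, weights):
        """Compute a priority score for one job from its resource request."""
        return (weights["cpu"] * job.cpus_requested
                + weights["gpu"] * job.gpus_requested      # GPU weight parameter
                + weights["memory"] * job.mem_requested_mb
                + weights["wait_time"] * job.wait_seconds)

    def rank_jobs(jobs, weights):
        """Sort jobs in descending order of priority, as in paragraph [0026]."""
        return sorted(jobs, key=lambda j: job_priority(j, weights), reverse=True)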



Abstract

The invention provides a hybrid scheduling method oriented to the central processing unit (CPU) and the graphics processing unit (GPU), which comprises the steps of: S1, reading the scheduling policy configuration and the priority configuration parameters when the scheduler is initialized; S2, reading information from the job scheduling system in each scheduling cycle; S3, the scheduler performing the priority calculation according to the job resource request information and the scheduling policies, determining the priority of each job and sorting the jobs in descending order; S4, performing job scheduling according to the scheduling policies and the sorted sequence obtained in S3; S5, sending job start requests to the scheduling system according to the scheduling results of S4; and S6, sleeping for a period of time and then returning to S2 for the next cycle. The method sets the relative relation between GPU and non-GPU resources through a customized weight, the GPU resource situation is checked during scheduling, and accurate scheduling of GPU jobs is achieved.
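
Reusing the configuration, record types, and ranking functions from the sketches above, steps S1 through S6 could be tied together roughly as follows; the resource manager object rms and its read_jobs, read_nodes, and start_job methods are hypothetical placeholders, not an interface defined by the patent.

    import time

    # Rough sketch of the S1-S6 loop from the abstract. `rms` stands for a
    # hypothetical resource management system client; its methods are assumed.

    def select_node(job, nodes):
        """Pick the first node with enough free CPUs, GPUs, and memory (assumed first-fit)."""
        for node in nodes:
            if (node.cpus_free >= job.cpus_requested
                    and node.gpus_free >= job.gpus_requested
                    and node.mem_free_mb >= job.mem_requested_mb):
                return node
        return None

    def scheduling_loop(rms):
        cfg = load_scheduler_config()                          # S1: read configuration
        while True:
            jobs, nodes = rms.read_jobs(), rms.read_nodes()    # S2: read job and node information
            ranked = rank_jobs(jobs, cfg["priority_weights"])  # S3: priorities, descending order
            for job in ranked:                                 # S4: schedule in priority order
                node = select_node(job, nodes)                 #     check CPU and GPU availability
                if node is not None:
                    rms.start_job(job, node)                   # S5: send job start request
            time.sleep(cfg["cycle_seconds"])                   # S6: sleep, then return to S2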

Description

Technical Field

[0001] The invention relates to job scheduling in high-performance computing, and in particular to a hybrid scheduling method oriented to the CPU and GPU.

Background Art

[0002] In recent years, with the popularity of GPU devices and the migration of applications to GPUs, more and more high-performance computing clusters have begun to use GPU devices to support large GPU applications. This places new requirements on the traditional job scheduling system. The resources managed by the traditional job scheduling system are mostly operating system resources (such as nodes, memory, and CPUs); the GPU is not described as an available resource in the scheduling system and therefore does not participate in scheduling decisions. Two questions commonly arise as a result: one is how to describe GPU resources and GPU resource requests; the other is how GPU applications can coexist with traditional parallel applications (MPI, OpenMP, PThread) to ensure the rational use of system resources ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F9/50
Inventors: 张涛, 李媛, 梁晓湛, 温鑫, 赵欢, 孙国忠, 邵宗有
Owner: DAWNING INFORMATION IND BEIJING