
A method for flexible scheduling of GPU resources based on heterogeneous application platforms

An application platform and scheduling method technology, applied in the fields of resource allocation, multiprogramming devices, and inter-program communication. It addresses the problems of inconsistent GPU resource scheduling information, scheduling confined to the interior of a single platform, and resource occupation conflicts, so as to achieve the effect of maximizing GPU resource utilization.

Active Publication Date: 2022-03-29
Shandong Computer Science Center (National Supercomputer Center in Jinan) +1
Cites: 5 | Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, this approach requires that a single, unique scheduling system be used on the platform; otherwise the scheduling information for GPU resources becomes inconsistent, resulting in resource occupation conflicts. The platforms commonly used for physical node-level scheduling, like hardware-level scheduling platforms, assume that all GPU computing nodes belong to one resource management platform and do not involve sharing or multiplexing. Their elastic scaling is confined to the interior of that platform and does not support heterogeneous resources or the expansion of heterogeneous applications, which is not conducive to improving the overall GPU resource utilization of the platform.




Embodiment Construction

[0038] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0039] As shown in Figure 1, a schematic diagram of the general flexible scheduling of GPU resources in the present invention is given. The three application platforms are a high-performance computing application platform, a cloud computing application platform, and a container application platform, with identification IDs 1, 2, and 3 respectively. The platform also maintains a public GPU node resource pool for flexible scheduling and dynamic scaling. As the core platform for elastic resource scaling, it is mainly composed of a configuration management module, an elastic scheduling module, a resource allocation module, a resource recycling module, an initialization module, and a collection module. The specific functions of each module are as follows:

[0040] The configuration management module is used to configure the management platform scheduling i...
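Paragraph [0039] describes three application platforms, a shared GPU node pool, and six core modules. As a reading aid only, the sketch below shows one way that configuration could be represented in code; every class name, field, and threshold value is an illustrative assumption, not the patent's own data structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: the platform IDs 1/2/3 follow paragraph [0039];
# all classes, fields, and values are assumptions, not the patent's own API.

@dataclass
class GpuNode:
    hostname: str
    gpu_count: int
    locked: bool = False                # locked nodes are not migrated during shrinking
    platform_id: Optional[int] = None   # None means the node sits in the public pool

@dataclass
class PlatformConfig:
    platform_id: int                    # 1 = HPC, 2 = cloud computing, 3 = container
    name: str
    scale_out_threshold: float          # utilization above which expansion is triggered
    scale_in_threshold: float           # utilization below which shrinking is triggered
    nodes: List[GpuNode] = field(default_factory=list)

# Public GPU node resource pool shared by the three application platforms.
public_pool: List[GpuNode] = [
    GpuNode("gpu-pool-01", gpu_count=8),
    GpuNode("gpu-pool-02", gpu_count=8),
]

platforms: List[PlatformConfig] = [
    PlatformConfig(1, "high-performance computing", 0.85, 0.30),
    PlatformConfig(2, "cloud computing", 0.85, 0.30),
    PlatformConfig(3, "container", 0.85, 0.30),
]
```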



Abstract

The flexible scheduling method for GPU resources based on heterogeneous application platforms of the present invention includes: a) obtaining GPU resource utilization information; b) setting trigger thresholds and times; c) screening and sorting the shrinking platform queue; d) screening and sorting the expansion platform queue; 1) selecting the platform to be scaled down; 2) building a GPU node list; 3) processing nodes in the locked state; 4) taking the node to be migrated offline; 5) adding it to the resource queue; 6) judging whether shrinking is complete. The GPU resource elastic scheduling method of the present invention can be adjusted flexibly according to the overall GPU load of the platform, thereby maximizing the utilization of platform GPU resources. Through resource monitoring, information collection, and the delivery of execution operations, it supports rapid and flexible deployment of cloud computing, big data, artificial intelligence, and high-performance computing platforms.
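To make the step sequence easier to follow, here is a minimal sketch of one shrinking pass that mirrors steps a) through d) and 1) through 6). It reuses the PlatformConfig and GpuNode classes sketched earlier; the stub utilization helper, the consecutive-trigger counting, and the "one node per pass" stopping rule are assumptions, since the visible patent text does not spell out those details.

```python
import random
from typing import Dict, List

# Sketch of one elastic-scheduling shrinking pass, loosely following the abstract.
# All helpers and policies below are illustrative assumptions.

def get_gpu_utilization(platform: "PlatformConfig") -> float:
    """Stub for step a): a real deployment would query the collection module instead."""
    return random.random()

def shrink_pass(platforms: List["PlatformConfig"],
                public_pool: List["GpuNode"],
                low_counts: Dict[int, int],
                trigger_count: int = 3) -> List["PlatformConfig"]:
    # a) obtain GPU resource utilization information
    utilization = {p.platform_id: get_gpu_utilization(p) for p in platforms}

    # b) trigger thresholds live in PlatformConfig; low_counts tracks how many
    #    consecutive collection cycles each platform stayed below its threshold
    for p in platforms:
        if utilization[p.platform_id] < p.scale_in_threshold:
            low_counts[p.platform_id] = low_counts.get(p.platform_id, 0) + 1
        else:
            low_counts[p.platform_id] = 0

    # c) screen and sort the shrinking queue, least-utilized platform first
    shrink_queue = sorted(
        (p for p in platforms if low_counts[p.platform_id] >= trigger_count),
        key=lambda p: utilization[p.platform_id])

    # d) screen and sort the expansion queue, most-utilized platform first
    expand_queue = sorted(
        (p for p in platforms if utilization[p.platform_id] > p.scale_out_threshold),
        key=lambda p: utilization[p.platform_id], reverse=True)

    if not shrink_queue:
        return expand_queue

    # 1) select the platform to be scaled down
    victim = shrink_queue[0]
    # 2) build the GPU node list of that platform
    for node in list(victim.nodes):
        # 3) nodes in the locked state are left in place
        if node.locked:
            continue
        # 4) take the node to be migrated offline (placeholder for the real offline action)
        node.platform_id = None
        # 5) add it to the public resource queue
        victim.nodes.remove(node)
        public_pool.append(node)
        # 6) judge whether shrinking is complete; here we simply stop after one node
        break

    return expand_queue
```

The expansion queue returned here would then be consumed by the resource allocation module, which assigns nodes from the public pool to the most heavily loaded platforms; that half of the cycle is not detailed in the visible text, so it is left out of the sketch.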

Description

Technical field

[0001] The present invention relates to a method for flexible scheduling of GPU resources, and more specifically to a method for flexible scheduling of GPU resources based on heterogeneous application platforms.

Background technique

[0002] Graphics processing unit (GPU) resources have in recent years been used increasingly in cloud computing, artificial intelligence, and high-performance computing because of their excellent parallel computing capability, higher bandwidth, and higher clock frequency. At the same time, because GPU resources are generally more expensive than CPUs, they are scarce in these computing application scenarios. Improving the utilization rate of GPU resources is therefore generally achieved through resource scheduling.

[0003] GPU resource scheduling can generally be divided into task-level scheduling, hardware-level scheduling, and node-level scheduling. The task-level schedulin...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/50, G06F9/4401, G06F9/54
CPC: G06F9/5011, G06F9/5027, G06F9/546, G06F9/4403, G06F2209/5012, G06F2209/548
Inventors: 王继彬, 刘鑫, 郭莹, 杨美红
Owner: Shandong Computer Science Center (National Supercomputer Center in Jinan)