
High-performance computing resource scheduling fair sharing method

A technology for high-performance computing and resource scheduling, applied in the field of fair sharing of high-performance computing resource scheduling, addressing problems such as insufficient "fairness" in existing schedulers

Inactive Publication Date: 2021-09-21
Applicant: BEIJING SKYCLOUD RONGCHUANG SOFTWARE TECH
Cites: 0 | Cited by: 0

AI Technical Summary

Problems solved by technology

[0020] 3. Existing fair-sharing algorithms
[0030] Many fair-share scheduling algorithms (SLURM, for example) consider only the current resource usage of the shared-tree nodes, which is not sufficiently "fair".




Embodiment Construction

[0062] As shown in figure 2, the high-performance computing resource scheduling fair sharing method provided by the present invention includes the following steps:

[0063] S1: Data structure initialization: convert the configured fair-share structure into a tree data structure, calculate the static quota of each leaf, set the dynamic quota equal to the static quota, and create a sub-queue for each leaf;

[0064] The fair-share structure is shown in figure 1; all the following descriptions take the structure in that figure as an example.

[0065] S11: Calculate the global share quota of each leaf of the fair-share tree: define the cluster-level quota as 1 and compute the share quota of each leaf from top to bottom. In the example of figure 1, department 1 is 0.6667, department 2 is 0.3333, user 3 is 0.2222, user 4 is 0.4445, item 1 is 0.1333, item 2 is 0.2, user 1 is 0.0667, and user 2 is 0.0333. The quota of each unit in the last leaf pa...



Abstract

According to the high-performance computing resource scheduling fair sharing method provided by the invention, fairer resource sharing is achieved by weighting the dynamic quota with the historical resources a leaf has used, which improves the quality of service for users of a high-performance computing system. The leaf to which a task belongs is found through a hash table, and the ordering of the shared-tree nodes is updated after each task is successfully scheduled, which improves scheduling speed and gives the system high throughput.

Description

Technical field

[0001] The invention relates to high-performance computing resources and a task scheduling method, in particular to a high-performance computing resource scheduling fair sharing method based on distributed computing.

Background technique

[0002] The following is an introduction to the relevant fields:
[0003] 1. High-performance computing and big data task scheduling systems
[0004] Both high-performance computing and big data are distributed computing systems; that is, the entire system is a cluster of multiple servers, and computing and data tasks are distributed to each server to run.
[0005] The resource and task scheduling system is the key technology of a distributed computing system. A user's computing tasks are run through the resource and task scheduling system rather than by directly accessing a particular server.
[0006] A task is a period of computation that has a beginning and an end. Users submit multiple tasks to the queu...


Application Information

Patent Timeline: no application data
Patent Type & Authority: Application (China)
IPC(8): G06F9/48
CPC: G06F9/4881
Inventor: 陆伟钊
Owner: BEIJING SKYCLOUD RONGCHUANG SOFTWARE TECH