
Resource scheduling method and device, and storage medium

A resource scheduling and computer device technology, applied in the field of high-performance computing, which addresses problems such as the inability to schedule resources efficiently and the inability to evaluate GPU acceleration effects effectively.

Status: Inactive | Publication Date: 2020-06-05
TRANSWARP INFORMATION TECH SHANGHAI

AI Technical Summary

Problems solved by technology

[0004] In the prior art, whether the GPU provides an acceleration effect in a distributed heterogeneous computing environment is usually judged from human experience combined with the measured results obtained after the code has been migrated. It is therefore impossible to evaluate the GPU acceleration effect and determine the running hardware of a computing operator before the computation is performed, so resources cannot be scheduled efficiently.



Examples


Embodiment 1

[0049] Figure 1 is a flowchart of a resource scheduling method provided by Embodiment 1 of the present invention. This embodiment is applicable to the case where it is determined, in high-performance computing, whether a computing operator runs on a CPU or a GPU. The method can be executed by a resource scheduling device, which can be implemented in software and/or hardware and can be integrated in a processor. As shown in Figure 1, the method specifically includes:

[0050] Step 110: acquire the computing operator to be run in the distributed computing cluster.

[0051] In the embodiment of the present invention, the calculation of the GPU acceleration ratio is illustrated with a distributed computing cluster whose high-performance computing server uses an Intel E5 2600 series CPU, an NVIDIA GTX 1080 Ti GPU, a PCI-E 3.0 motherboard, and a PCIe x8 slot for the GPU as an example, but the embodiment ...
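The concrete speed-up formulas are truncated in this excerpt, so the following is only a minimal sketch of how a GPU acceleration ratio could be estimated for the example hardware (Intel E5 2600 series CPU, NVIDIA GTX 1080 Ti in a PCIe 3.0 x8 slot). The cost model, function name, and example numbers are illustrative assumptions, not the patent's own formulas.

```python
# Minimal sketch of a GPU acceleration-ratio estimate for the example
# hardware above. The cost model and constants are illustrative
# assumptions, not the patent's formulas.

def estimate_acceleration_ratio(data_bytes: float,
                                work_flops: float,
                                cpu_flops: float,
                                gpu_flops: float,
                                pcie_bandwidth: float = 7.9e9) -> float:
    """Estimated speed-up of running the operator on the GPU (assumed model).

    pcie_bandwidth defaults to roughly the theoretical per-direction
    bandwidth of a PCIe 3.0 x8 link (~7.9 GB/s).
    """
    # CPU path: data is already in host memory, so only compute time counts.
    t_cpu = work_flops / cpu_flops

    # GPU path: host-to-device and device-to-host transfers over PCIe,
    # plus the GPU compute time.
    t_transfer = 2 * data_bytes / pcie_bandwidth
    t_gpu = t_transfer + work_flops / gpu_flops
    return t_cpu / t_gpu


if __name__ == "__main__":
    # Hypothetical operator: 1 GB of I/O data and 200 GFLOP of work per node.
    ratio = estimate_acceleration_ratio(data_bytes=1e9, work_flops=200e9,
                                        cpu_flops=0.5e12, gpu_flops=10e12)
    print(f"estimated GPU acceleration ratio: {ratio:.2f}")
```

Under this assumed model the GPU only pays off when the extra PCIe transfer time is small relative to the compute time it saves, which is the trade-off the acceleration ratio is meant to capture.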

Embodiment 2

[0090] Figure 3 is a schematic structural diagram of a resource scheduling device provided in Embodiment 2 of the present invention. With reference to Figure 3, the device includes: a computing operator acquisition module 310, a distributed CPU total computation time calculation module 320, a distributed GPU total computation time calculation module 330, a GPU acceleration ratio calculation module 340, and a computing operator running hardware determination module 350.

[0091] The computing operator acquisition module 310 is configured to acquire the computing operator to be run in the distributed computing cluster;

[0092] The distributed CPU total computation time calculation module 320 is configured to calculate the distributed CPU total computation time for the computing operator according to the characteristic parameters of the CPU cluster matching the distributed computing cluster, the preset single-machine input and output data volume, and the computation ...
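As a rough illustration of the module structure described in paragraphs [0090] to [0092], the sketch below wires the five modules of Figure 3 into a single class. All class, field, and method names, and the placeholder time models, are assumptions made for illustration; the patent's own formulas for modules 320 and 330 are not reproduced here.

```python
# Illustrative sketch of the device structure in Figure 3 (modules 310-350).
# All names and the placeholder time models are assumptions.
from dataclasses import dataclass


@dataclass
class CpuClusterParams:       # characteristic parameters of the CPU cluster (assumed fields)
    nodes: int
    flops_per_node: float     # sustained FLOP/s per node


@dataclass
class GpuClusterParams:       # GPU single-machine and cluster parameters (assumed fields)
    nodes: int
    flops_per_node: float
    pcie_bandwidth: float     # bytes/s, e.g. ~7.9e9 for a PCIe 3.0 x8 slot


class ResourceSchedulingDevice:
    def __init__(self, speedup_threshold: float = 1.0):
        self.speedup_threshold = speedup_threshold  # the "preset value"

    # Module 310: acquire the computing operator to be run (stubbed here).
    def acquire_operator(self, pending_operators):
        return pending_operators.pop(0)

    # Module 320: distributed CPU total computation time (placeholder model).
    def cpu_total_time(self, cpu: CpuClusterParams, work_flops: float) -> float:
        return work_flops / (cpu.nodes * cpu.flops_per_node)

    # Module 330: distributed GPU total computation time (placeholder model:
    # PCIe transfer of the single-machine I/O data volume plus GPU compute).
    def gpu_total_time(self, gpu: GpuClusterParams, io_bytes: float,
                       work_flops: float) -> float:
        transfer = 2 * io_bytes / gpu.pcie_bandwidth
        return transfer + work_flops / (gpu.nodes * gpu.flops_per_node)

    # Module 340: GPU acceleration ratio.
    def acceleration_ratio(self, t_cpu: float, t_gpu: float) -> float:
        return t_cpu / t_gpu

    # Module 350: decide which hardware the operator runs on.
    def choose_hardware(self, ratio: float) -> str:
        return "GPU" if ratio > self.speedup_threshold else "CPU"
```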

Embodiment 3

[0103] Figure 4 is a schematic structural diagram of a computer device provided in Embodiment 3 of the present invention. As shown in Figure 4, the device includes:

[0104] one or more processors 410 (Figure 4 takes one processor 410 as an example);

[0105] memory 420;

[0106] The device may also include an input device 430 and an output device 440.

[0107] The processor 410, memory 420, input device 430, and output device 440 in the device can be connected by a bus or by other means (Figure 4 takes connection via a bus as an example).

[0108] The memory 420, as a non-transitory computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the resource scheduling method in the embodiments of the present invention (for example, the computing operator acquisition module 310 shown in Figure 3, the distributed CPU total computation time calculation module 320, ...



Abstract

The embodiment of the invention discloses a resource scheduling method and device, and a storage medium. The method comprises: obtaining a computing operator to be run in a distributed computing cluster; calculating the distributed CPU total computation time according to the CPU cluster characteristic parameters matched with the distributed computing cluster, the preset single-machine input and output data volume, and the computation category of the computing operator; calculating the distributed GPU total computation time according to the GPU single-machine characteristic parameters, the GPU cluster characteristic parameters, the single-machine input and output data volume, and the computation category matched with the distributed computing cluster; calculating a GPU speed-up ratio according to the distributed heterogeneous CPU total computation time and the distributed heterogeneous GPU total computation time; and, if the GPU speed-up ratio is greater than a preset value, determining to run the computing operator on the GPU, otherwise determining to run it on the CPU. The speed-up effect of the GPU can thus be quickly and automatically evaluated before the algorithm is computed, the running hardware of a computing operator is determined, and resources are scheduled efficiently.
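Read as pseudocode, the decision rule summarised in the abstract reduces to forming the speed-up ratio from the two estimated total times and comparing it with the preset value. A minimal sketch, with an assumed function name and a default threshold of 1.0, is:

```python
# Minimal sketch of the scheduling decision described in the abstract.
# The two total-time estimates are taken as inputs; the function name
# and default threshold are assumptions for illustration.

def choose_target(t_cpu_total: float, t_gpu_total: float,
                  preset_value: float = 1.0) -> str:
    """Return 'GPU' if the GPU speed-up ratio exceeds the preset value, else 'CPU'."""
    speedup_ratio = t_cpu_total / t_gpu_total
    return "GPU" if speedup_ratio > preset_value else "CPU"


# Example: a 40 s CPU estimate versus a 10 s GPU estimate gives a ratio of 4,
# so the operator would be placed on the GPU.
print(choose_target(40.0, 10.0))  # -> "GPU"
```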

Description

Technical field

[0001] Embodiments of the present invention relate to the technical field of high-performance computing, and in particular to a resource scheduling method, device, and storage medium.

Background technique

[0002] With the development of high-performance computing technology, computing devices increasingly include, in addition to the central processing unit (CPU), co-processors such as the graphics processing unit (GPU), the field-programmable gate array (FPGA), and embedded accelerator cards. These co-processors can accelerate traditional CPU-based computing programs and improve the overall computing performance of a business system.

[0003] Ideally, GPU acceleration can be used to accelerate both single-machine computing and multi-machine distributed computing. However, in a distributed cluster environment, when an algorithm is migrated from the CPU to the GPU, the act...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F9/50
CPC: G06F9/5027; G06F9/5072
Inventors: 张燕 (Zhang Yan), 夏正勋 (Xia Zhengxun)
Owner: TRANSWARP INFORMATION TECH SHANGHAI