A multi-GPU task scheduling method under a virtualization technology

A virtualization and task-scheduling technology, applied in the field of multi-GPU task scheduling, that addresses the problems of uneven load, high communication overhead, and low throughput and transmission rates, achieving load balancing and improved scheduling efficiency.

Inactive Publication Date: 2019-06-21
PLA STRATEGIC SUPPORT FORCE INFORMATION ENG UNIV PLA SSF IEU


Problems solved by technology

[0005] The CPU+GPU heterogeneous platform is a hardware platform well suited to intensive computing, characterized by high throughput and low transmission...

Method used



Examples


Embodiment 1

[0040] As shown in Figure 1, a multi-GPU task scheduling method under a virtualization technology includes the following steps:

[0041] Step S101: construct a DAG (directed acyclic graph) of the application; the DAG includes a plurality of task nodes;

[0042] Specifically, the DAG of the task is expressed as DAG = [V, E, C, TC, TP], where V represents the task nodes, E the directed edges connecting pairs of task nodes, C the amount of computation of a task node, TC the amount of data a task node must process, and TP the amount of data it produces.
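The five-tuple above can be sketched as a small data structure. This is a minimal illustration: the class name, field types, and the example values are assumptions made here for clarity, not the patent's implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the DAG = [V, E, C, TC, TP] representation.
# Field names mirror the symbols in paragraph [0042]; the concrete
# layout and example values are illustrative assumptions.
@dataclass
class TaskDAG:
    V: list   # task node identifiers
    E: list   # directed edges (u, v): u must finish before v
    C: dict   # C[v]  = computation amount of task node v
    TC: dict  # TC[v] = amount of data task node v must process
    TP: dict  # TP[v] = amount of data task node v produces

# A diamond-shaped example application with four task nodes.
dag = TaskDAG(
    V=["t1", "t2", "t3", "t4"],
    E=[("t1", "t2"), ("t1", "t3"), ("t2", "t4"), ("t3", "t4")],
    C={"t1": 4, "t2": 8, "t3": 6, "t4": 2},
    TC={"t1": 16, "t2": 32, "t3": 24, "t4": 8},
    TP={"t1": 32, "t2": 8, "t3": 8, "t4": 4},
)
```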

[0043] Step S102: layer the DAG by means of topological sorting;
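Layering by topological sort can be sketched as repeatedly peeling off all nodes whose predecessors have completed (Kahn's algorithm). The function name and data shapes below are assumptions for illustration, not the patent's code.

```python
from collections import defaultdict

def layer_dag(nodes, edges):
    """Partition DAG nodes into layers: each layer holds the nodes
    whose in-degree drops to zero once earlier layers are removed
    (Kahn's algorithm applied level by level)."""
    indeg = {v: 0 for v in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    layers = []
    frontier = [v for v in nodes if indeg[v] == 0]
    while frontier:
        layers.append(frontier)
        nxt = []
        for u in frontier:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        frontier = nxt
    return layers

# Example: diamond-shaped DAG yields three layers.
print(layer_dag(["t1", "t2", "t3", "t4"],
                [("t1", "t2"), ("t1", "t3"), ("t2", "t4"), ("t3", "t4")]))
# -> [['t1'], ['t2', 't3'], ['t4']]
```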

[0044] Step S103: sort the task nodes within each layer of the DAG according to their priorities;

[0045] Specifically, the priority of a task node is obtained from the task-node priority formula:

[0046] Priority=Density+AverDown (2)...
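The definitions of Density and AverDown are truncated in this excerpt, so the sketch below only shows the per-layer sort by formula (2); the numeric inputs are illustrative placeholders, not the patent's definitions of the two terms.

```python
def sort_layer_by_priority(layer, density, averdown):
    """Sort one DAG layer in descending priority, where
    Priority = Density + AverDown (formula (2) in the patent).
    `density` and `averdown` map each node to its two terms; their
    exact definitions are not given in this excerpt, so the values
    passed in below are illustrative assumptions."""
    return sorted(layer, key=lambda v: density[v] + averdown[v], reverse=True)

layer = ["t2", "t3"]
density = {"t2": 0.5, "t3": 0.8}
averdown = {"t2": 0.4, "t3": 0.3}
print(sort_layer_by_priority(layer, density, averdown))
# -> ['t3', 't2']   (priority 1.1 for t3 vs 0.9 for t2)
```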

Embodiment 2

[0057] As shown in Figure 2, another multi-GPU task scheduling method under virtualization technology includes:

[0058] Step S201: building a CPU+GPU hardware model;

[0059] As the performance gains from CPU manufacturing processes approach a bottleneck, the high-throughput advantages of the GPU's lightweight multi-threaded computing become increasingly prominent. Manufacturers combine the logical control capability of the CPU with the floating-point computing capability of the GPU to form a heterogeneous co-processing platform in which the CPU performs master control and the GPU performs the main computation; the platform model is shown in Figure 3.

[0060] The CPU and GPU are connected through the PCIe bus, and there are two connection methods between multiple GPUs. In one, multiple GPUs sit on the same PCIe bus and can transmit data directly over it; in the other, GPU data transmission needs to be carrie...
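The two interconnect cases above imply different transfer costs between GPUs. A minimal sketch of such a cost model follows; the function, the PCIe rate, and the cross-bus penalty factor are illustrative assumptions made here, not values from the patent.

```python
def transfer_time(bytes_moved, src_bus, dst_bus,
                  pcie_rate=16e9, cross_bus_penalty=2.0):
    """Estimate GPU-to-GPU transfer time in seconds.
    GPUs on the same PCIe bus move data directly; GPUs on different
    buses are assumed to relay through host memory, modeled here as
    a simple multiplicative penalty. All rates are illustrative."""
    t = bytes_moved / pcie_rate
    if src_bus != dst_bus:
        t *= cross_bus_penalty
    return t

same = transfer_time(1 << 30, src_bus=0, dst_bus=0)   # same PCIe bus
cross = transfer_time(1 << 30, src_bus=0, dst_bus=1)  # across buses
print(cross / same)
# -> 2.0
```

A scheduler built on this model would prefer to co-locate heavily communicating task nodes on GPUs that share a bus.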



Abstract

The invention relates to the technical field of task scheduling, and discloses a multi-GPU (Graphics Processing Unit) task scheduling method under a virtualization technology, which comprises the following steps: step 1, constructing a DAG (directed acyclic graph) of the application, the DAG comprising a plurality of task nodes; step 2, layering the DAG by topological sorting; step 3, sorting the task nodes of each layer in the DAG according to the priorities of the task nodes; step 4, calculating the predicted earliest completion time of the target task node on each processor, and mapping the task node to the GPU processor predicted to complete it earliest; and step 5, scheduling the tasks on the GPU processor with the earliest predicted task completion time. Task scheduling efficiency is thereby improved.
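Steps 4 and 5 above describe an earliest-finish-time mapping. A hedged sketch of that selection rule follows; the function name, the input tables, and the additive cost model are assumptions for illustration, not the patent's cost formulas.

```python
def map_task_to_gpu(task, gpus, exec_time, comm_time, gpu_ready):
    """Map a task to the GPU with the earliest predicted finish time.
    exec_time[(task, g)] is the predicted run time of `task` on GPU g,
    comm_time[(task, g)] the time to move its inputs to g, and
    gpu_ready[g] the time at which g becomes free. The additive model
    below is an illustrative assumption, not the patent's formula."""
    def eft(g):
        return gpu_ready[g] + comm_time[(task, g)] + exec_time[(task, g)]
    best = min(gpus, key=eft)
    gpu_ready[best] = eft(best)  # reserve the GPU until the task finishes
    return best

gpus = ["g0", "g1"]
exec_time = {("t1", "g0"): 5.0, ("t1", "g1"): 4.0}
comm_time = {("t1", "g0"): 0.5, ("t1", "g1"): 2.5}
gpu_ready = {"g0": 0.0, "g1": 0.0}
print(map_task_to_gpu("t1", gpus, exec_time, comm_time, gpu_ready))
# -> 'g0'   (predicted finish 5.5 on g0 vs 6.5 on g1)
```

Note that g0 wins despite slower execution, because its communication cost is lower; this is how such a rule trades compute speed against transfer overhead.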

Description

technical field

[0001] The invention relates to the technical field of task scheduling, in particular to a multi-GPU task scheduling method under virtualization technology.

Background technique

[0002] With the development of computer, communication, and artificial-intelligence technology, software definition is playing an increasingly important role across industries, from software-defined radio and software-defined radar to software-defined networks, software-defined storage, and software-defined data centers. The traditional relationship between hardware and software, dominated by hardware and supplemented by software, is gradually being reversed. The virtualization of hardware resources and the separation of software and hardware have begun to play an important role in many fields. With the rapid development of GPGPU technology, the CPU+GPU heterogeneous platform has become an important development booster for high-performance heterogeneous platforms due to its ex...


Application Information

IPC IPC(8): G06F9/48, G06F9/50
Inventor 王学成马金全岳春生彭华胡泽明王雅琪杨迪
Owner PLA STRATEGIC SUPPORT FORCE INFORMATION ENG UNIV PLA SSF IEU