Dynamic scheduling method and system for collaborative computing between CPU and GPU based on two-level scheduling

A dynamic scheduling and global scheduling technology, applied in the field of distributed computing, which can solve problems such as failure to achieve the shortest task completion time, failure to fully utilize the cluster's computing capacity, and inconsistent end times across computing nodes, so as to shorten task processing time, realize pipeline processing, and ensure that nodes do not wait for each other.

Active Publication Date: 2019-01-22
NARI TECH CO LTD +4

AI Technical Summary

Problems solved by technology

This prior-art method has an obvious shortcoming: the prediction may not be accurate, which makes the end times of the computing nodes inconsistent and causes some nod...




Example Embodiment

[0058] Example 1

[0059] Figure 1 shows the software framework of the dynamic scheduling method of the present invention. In Figure 1, the global scheduling module can be deployed on any node and uses active-standby redundancy to ensure reliability; the node scheduling module runs on each node. The global scheduling module is responsible for distributing data blocks in the system according to the computing power of each node. Each node maintains two data storage queues: the current processing queue and the data cache queue. The current processing queue holds the data blocks currently being processed by the CPU and GPU; the data cache queue holds pending data blocks transmitted to the node over the network.
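As a rough illustration of the two-queue design described above, the following Python sketch (all class and method names are hypothetical, not taken from the patent) promotes the cached batch to the processing queue only when the latter runs empty, so that computation and network transfer can overlap:

```python
from collections import deque

class NodeScheduler:
    """Minimal sketch of a node's two-queue structure (illustrative names).

    - current_queue: data blocks the CPU/GPU are processing now
    - cache_queue:   blocks already transferred over the network, waiting
    """

    def __init__(self):
        self.current_queue = deque()
        self.cache_queue = deque()

    def receive_block(self, block):
        # Network transfer lands in the cache queue, overlapping with compute.
        self.cache_queue.append(block)

    def next_block(self):
        # Processors drain the current queue; when it empties, the cached
        # batch is promoted so compute does not wait on the network.
        if not self.current_queue and self.cache_queue:
            self.current_queue, self.cache_queue = self.cache_queue, deque()
        return self.current_queue.popleft() if self.current_queue else None

node = NodeScheduler()
for b in ("b0", "b1"):
    node.receive_block(b)
processed = [node.next_block(), node.next_block(), node.next_block()]
# processed == ["b0", "b1", None]
```

Swapping the whole deque, rather than moving blocks one by one, keeps the promotion step O(1); a real implementation would also need locking between the network receiver and the processing threads.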

[0060] Figure 2 shows a schematic diagram of the data distribution of the global scheduling module. In Figure 2, the global scheduling module first determines the computing power weight of each node according to parameters such as t...
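The text truncates before listing the exact weighting formula, so the following Python sketch assumes a simple illustrative one (clock frequency × cores × idle rate for the CPU, plus a scaled stream-processor count for the GPU); the function names and the `gpu_factor` constant are hypothetical:

```python
def node_weight(cpu_ghz, cpu_cores, cpu_idle_rate, gpu_sps, gpu_factor=0.01):
    # Hypothetical weighting: available CPU throughput plus a scaled GPU term.
    cpu_score = cpu_ghz * cpu_cores * cpu_idle_rate  # idle rate = spare capacity
    gpu_score = gpu_sps * gpu_factor
    return cpu_score + gpu_score

def normalized_weights(nodes):
    # Normalize so the weights sum to 1 and can drive proportional data splits.
    raw = {name: node_weight(**params) for name, params in nodes.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

nodes = {
    "node-a": dict(cpu_ghz=3.0, cpu_cores=16, cpu_idle_rate=0.5, gpu_sps=2048),
    "node-b": dict(cpu_ghz=2.4, cpu_cores=8, cpu_idle_rate=0.9, gpu_sps=0),
}
weights = normalized_weights(nodes)
# node-a (with a GPU) receives the larger share of the data.
```

Any monotone function of the monitored parameters would fit the scheme; the key property is that the weights are recomputed from live measurements rather than fixed once at startup.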

Example Embodiment

[0065] Example 2

[0066] Based on the same inventive concept as Embodiment 1, an embodiment of the present invention provides a dynamic scheduling system for collaborative computing between CPU and GPU based on two-level scheduling, including:

[0067] A system-level resource real-time monitoring module; the resource real-time monitoring module monitors the relevant parameters of the CPU and GPU in each node in real time; the relevant parameters include the CPU model, clock frequency, number of cores, and average idle rate, as well as the GPU model and number of stream processors;

[0068] A global scheduling module; the global scheduling module receives the information sent by the resource real-time monitoring module and estimates the processing capacity of each node in the system; in response to the batched requests of the node scheduling module in each node, it uses each node's estimated processing capacity to dynam...
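The parameters that [0067] says the monitoring module reports can be captured in a small record type; this Python sketch uses illustrative field names and sample values, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class NodeResources:
    """Per-node snapshot reported by the resource real-time monitoring module."""
    cpu_model: str
    cpu_freq_ghz: float
    cpu_cores: int
    cpu_avg_idle_rate: float   # fraction of time the CPU is idle, 0.0-1.0
    gpu_model: str
    gpu_stream_processors: int

# Hypothetical sample report for one node.
sample = NodeResources(
    cpu_model="example-cpu", cpu_freq_ghz=3.0, cpu_cores=16,
    cpu_avg_idle_rate=0.45, gpu_model="example-gpu",
    gpu_stream_processors=2048,
)
```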



Abstract

The invention discloses a dynamic scheduling method and system for collaborative computing between a CPU and a GPU based on two-level scheduling. The method includes: forecasting the processing capacity of each node in the system; a global scheduling module dynamically distributing data to each node in batches, according to each node's processing capacity and the requests of the node scheduling module in each node; the node scheduling module requesting the next batch of data from the global scheduling module when it finds that the queue holding data to be processed is empty; and dynamically scheduling tasks within each node according to the processing capacities of its CPU and GPU. By exploiting the heterogeneity of system resources, the invention allows weak nodes to take on fewer tasks and strong nodes to process more, which improves the overall concurrency of the CPU/GPU heterogeneous hybrid parallel system and reduces the task completion time.
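The pull-based dispatch in the abstract can be sketched as follows; the batch-sizing rule (base batch scaled by each node's normalized weight) and all names are illustrative assumptions, not the patent's formula:

```python
from collections import deque

class GlobalScheduler:
    """Sketch of pull-based batch dispatch driven by capacity weights."""

    def __init__(self, blocks, weights, base_batch=4):
        self.pending = deque(blocks)
        self.weights = weights        # node name -> normalized capacity weight
        self.base_batch = base_batch  # assumed average batch size per node

    def request_batch(self, node):
        # A node calls this when its to-be-processed queue runs empty;
        # stronger nodes are granted proportionally larger batches.
        n = max(1, round(self.base_batch * self.weights[node] * len(self.weights)))
        n = min(n, len(self.pending))
        return [self.pending.popleft() for _ in range(n)]

sched = GlobalScheduler(list(range(10)), {"fast": 0.75, "slow": 0.25})
fast_batch = sched.request_batch("fast")  # larger grant for the strong node
slow_batch = sched.request_batch("slow")
```

Because nodes only ask for more data when their local queue empties, no node sits idle waiting for stragglers, which is the pipelining effect the abstract claims.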

Description

Technical field

[0001] The invention belongs to the technical field of distributed computing, and in particular relates to a dynamic scheduling method for collaborative computing between CPU and GPU based on two-level scheduling.

Background technique

[0002] The CPU/GPU heterogeneous hybrid parallel system has become a new type of high-performance computing platform due to its strong computing power, high cost-performance ratio, and low energy consumption. However, its complex architecture also poses a huge challenge for parallel computing research. In the prior art, research on task scheduling in CPU/GPU heterogeneous hybrid parallel systems generally predicts the computing power of each type of hardware, or the running time of tasks on each type of processor, and then performs a one-time task allocation. This method has an obvious shortcoming: the prediction may not be accurate, which makes the end times of the computing nodes inconsistent and causes some node...

Claims


Application Information

IPC(8): G06F9/50
CPC: G06F9/5044; G06F9/5066; Y02D10/00
Inventor 高原顾文杰李华东张磊陈泊宇张用顾雯轩陈素红丁雨恒
Owner NARI TECH CO LTD