
Dynamic load balancing method, system and terminal

A dynamic load balancing technology, applied to resource allocation, program-control design, instruments, etc. It addresses problems such as low computing performance, failure to account for GPU performance fluctuation, and low overall system performance.

Inactive Publication Date: 2019-10-15
CHENGDU UNIV OF INFORMATION TECH +4

AI Technical Summary

Problems solved by technology

[0006] When the above load-balancing method is used in a heterogeneous multi-GPU system (heterogeneous meaning that the GPU devices are produced by different manufacturers or belong to different product series, so each GPU has different computing performance), and the difference in computing performance between the GPUs is large, the performance of the system is determined by the slowest GPU. This leads to very low overall computing performance, potentially lower than using only one of the higher-performance GPUs alone.
Moreover, the above load balancing is a static data allocation. Static load balancing does not account for performance fluctuations of the GPUs during actual operation, which can lead to incorrect data and task allocation modes and drastically reduce the computing performance of the entire system.
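The slowest-GPU bottleneck described above can be illustrated with a minimal model (this is an illustrative sketch, not from the patent text; the function name and the speed figures are assumptions):

```python
# Illustrative model of static equal-split load balancing: total time is
# set by the slowest GPU, so adding a slow GPU can make the system slower
# than running the fast GPU alone.
def static_split_time(total_work, speeds):
    """Time to finish when total_work is split equally among GPUs
    with the given speeds (work units per second)."""
    chunk = total_work / len(speeds)
    return max(chunk / s for s in speeds)

fast_only = static_split_time(1200, [100])   # 12.0 s on one fast GPU
mixed = static_split_time(1200, [100, 10])   # 60.0 s: the slow GPU dominates
```

Here adding a GPU that is 10x slower makes the equal split five times slower than the single fast GPU, which is the failure mode the patent's dynamic allocation is designed to avoid.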

Method used




Embodiment Construction

[0026] The solution is described below with reference to the accompanying drawings and specific embodiments.

[0027] Figure 3 is a schematic flow diagram of the dynamic load balancing method provided in an embodiment of the present application. Referring to Figure 3, the method includes:

[0028] S101. Dynamically determine a data computation amount of each GPU in the heterogeneous multi-GPU system.

[0029] Each GPU is treated as an independent computing node, and the relative computing power of each node is predicted by a fuzzy neural network.

[0030] As a schematic example, suppose a heterogeneous multi-GPU system contains m GPUs, corresponding to m computing nodes NODE = {N1, N2, ..., Nm}. The original large data block is initially divided into a set of unit data blocks of equal size, DATA = {D1, D2, ..., Dn}. The purpose of load balancing is to establish a mapping from the unit-data-block set DATA to the computing-node set NODE, and ...
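A proportional DATA-to-NODE mapping of the kind paragraph [0030] describes can be sketched as follows (a hypothetical implementation: the function name is ours, and in the patent the relative powers would come from the fuzzy-neural-network predictor of step S101):

```python
# Sketch of mapping n unit data blocks DATA = {D1..Dn} onto m nodes
# NODE = {N1..Nm} in proportion to each node's relative computing power.
def allocate_blocks(n_blocks, powers):
    """Return the number of unit data blocks assigned to each node,
    proportional to its relative computing power."""
    total = sum(powers)
    alloc = [int(n_blocks * p / total) for p in powers]
    # Hand leftover blocks (lost to rounding down) to the strongest nodes.
    leftover = n_blocks - sum(alloc)
    for i in sorted(range(len(powers)), key=lambda i: -powers[i])[:leftover]:
        alloc[i] += 1
    return alloc

allocate_blocks(10, [3.0, 1.0])  # the 3x-faster node gets 8 of 10 blocks
```

Any rounding scheme that conserves the total block count would do; giving remainders to the strongest nodes is just one simple choice.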



Abstract

The invention discloses a dynamic load balancing method, system and terminal. The method comprises the steps of: dynamically determining the data computation amount of each GPU in a heterogeneous multi-GPU system; allocating data sets of different lengths to GPUs of different performance according to the total amount of data to be processed and the relative computing power of each GPU; determining the current running state and remaining data volume of each GPU; and, when a first GPU finishes processing its allocated data set, performing a secondary allocation of the remaining data if the remaining processing time of a second GPU exceeds a preset threshold. Because a data set is not submitted to its target GPU all at once, the remaining unit data blocks on a GPU that may be experiencing computation delay can be flexibly re-allocated to other GPUs during secondary allocation. Data transmission can thus be overlapped with computation, multi-GPU computing resources are utilized to the greatest extent, and the parallel computing performance of the whole system is improved.
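The secondary-allocation step in the abstract can be sketched in a few lines (a hedged, simplified model: the names `queues`, `speeds`, `finished`, and `threshold`, and the proportional-share rule, are our assumptions, not the patent's exact procedure):

```python
# Sketch of secondary allocation: when GPU `finished` runs out of work,
# steal a proportional share of the remaining unit blocks from any GPU
# whose estimated remaining time exceeds the threshold.
def secondary_allocate(queues, speeds, finished, threshold):
    """queues: remaining unit blocks per GPU; speeds: blocks per second.
    Mutates and returns queues after re-balancing toward `finished`."""
    for g, q in enumerate(queues):
        if g == finished or not q:
            continue
        if len(q) / speeds[g] > threshold:
            # Give the idle GPU a share proportional to its relative speed.
            share = round(len(q) * speeds[finished] / (speeds[finished] + speeds[g]))
            if share:
                queues[finished].extend(q[-share:])
                del q[-share:]
    return queues

queues = [[], list(range(12))]   # GPU 0 is idle; GPU 1 still holds 12 blocks
secondary_allocate(queues, [3.0, 1.0], finished=0, threshold=2.0)
```

With a 3x-faster idle GPU and a 2-second threshold, 9 of the 12 pending blocks move to the idle GPU, which is the "flexible re-allocation" behavior the abstract claims.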

Description

Technical field
[0001] The present application relates to the technical field of big-data parallel computing, and in particular to a dynamic load balancing method, system and terminal.
Background technique
[0002] The computing performance of Graphics Processing Units (GPUs) has developed rapidly in recent years, and parallel computing represented by the GPU has become a research hotspot in big data and high-performance computing. However, limited by the computing power and memory capacity of a single GPU, a single-GPU parallel acceleration solution struggles to meet real-time processing requirements when faced with large-scale data or complex computing tasks. As a result, most servers and workstations today are equipped with multiple GPUs. Processing big data then requires a multi-GPU system: a computing task is assigned to multiple GPU nodes, and the GPUs share the work and complete it cooperatively in parallel.
[0003] As shown in Figure 1 ...

Claims


Application Information

IPC(8): G06F9/50
CPC: G06F9/505; G06F9/5083
Inventor: 张朝龙, 许源平, 许志杰, 黄健
Owner CHENGDU UNIV OF INFORMATION TECH