GPU cluster deep learning task parallelization method, device and electronic equipment

A GPU cluster deep learning technology, applied in the Internet field, which addresses problems such as failing to make full use of GPU resources, ignoring the physical characteristics of the resources and the characteristics of the tasks themselves, and thereby degrading the resource utilization of the GPU cluster.

Active Publication Date: 2019-11-01
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0005] Although this method realizes the parallelization of deep learning tasks to a certain extent, it mainly considers resource usage and does not take into account the physical characteristics of the resources or the characteristics of the task itself, so it cannot achieve efficient parallelization of deep learning tasks, which reduces the execution efficiency of deep learning workloads. At the same time, this method does not support fine-grained allocation of multiple tasks to a single GPU and therefore cannot make full use of the GPU resources on a node, which hinders the efficient execution of deep learning tasks, lowers the GPU utilization of the node, and thus affects the resource utilization of the GPU cluster.



Examples


Embodiment Construction

[0070] The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

[0071] The embodiment of the present application discloses a GPU cluster deep learning task parallelization method, device, electronic equipment, storage medium, and computer program product including instructions, which will be described respectively below.

[0072] An embodiment of this application provides a GPU cluster deep learning task parallelization method; see Figure 1, which is a schematic diagram of the GPU cluster deep learning task parallelization method of the...



Abstract

The embodiment of the invention provides a GPU cluster deep learning task parallelization method, a GPU cluster deep learning task parallelization device and electronic equipment, and relates to the technical field of the Internet. The method comprises the following steps: first, analyzing the similarity between a to-be-processed deep learning task and each computing node of a GPU cluster, and determining a target computing node for the to-be-processed deep learning task in the GPU cluster, which reduces the possibility of resource contention on the computing node and improves system resource utilization and the execution efficiency of the deep learning task; then, dividing the to-be-processed deep learning task into a plurality of target sub-tasks according to the number of GPUs required by the to-be-processed deep learning task; and finally, analyzing the interference level and the communication cost of each target sub-task and determining a target GPU for each target sub-task within the target computing node, which avoids unbalanced resource allocation across the GPUs in the computing node. High parallelization of the deep learning task is thereby realized, the resource utilization rate of the GPU cluster is improved, and the execution efficiency of the deep learning task is improved.
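The abstract describes a two-stage placement pipeline: pick a target computing node by task-to-node similarity, split the task into one sub-task per required GPU, then map each sub-task to a GPU on that node by weighing interference against communication cost. The sketch below is a minimal, hypothetical Python illustration of that flow; the resource-profile fields, the similarity and cost formulas, and the 0.1 weight are illustrative assumptions and are not the formulas disclosed in the application.

```python
from dataclasses import dataclass
from itertools import combinations

# Hypothetical resource profiles; the application does not publish its exact metrics here.
@dataclass
class GPU:
    gpu_id: int
    load: float          # current utilization in [0, 1]

@dataclass
class Node:
    name: str
    gpus: list           # list of GPU
    profile: dict        # e.g. {"gpu_util": 0.4, "mem_util": 0.3, "net_util": 0.2}

@dataclass
class DLTask:
    name: str
    num_gpus: int        # number of GPUs the task requires
    profile: dict        # same keys as Node.profile

def similarity(task: DLTask, node: Node) -> float:
    """Illustrative similarity: penalize nodes whose busy dimensions overlap the
    task's heavy dimensions, so that resource contention is less likely."""
    overlap = sum(task.profile[k] * node.profile.get(k, 0.0) for k in task.profile)
    return -overlap      # lower overlap -> higher score

def pick_target_node(task: DLTask, cluster: list) -> Node:
    # Stage 1: choose the computing node with the best (least-contending) score.
    return max(cluster, key=lambda n: similarity(task, n))

def split_into_subtasks(task: DLTask) -> list:
    # Stage 2: one target sub-task per required GPU (e.g., data-parallel replicas).
    return [f"{task.name}/shard{i}" for i in range(task.num_gpus)]

def assign_gpus(subtasks: list, node: Node) -> dict:
    """Stage 3: pick the GPU set that balances interference (existing load)
    against communication cost (here, the spread of GPU ids stands in for
    interconnect distance). Both terms are illustrative assumptions."""
    best, best_cost = None, float("inf")
    for combo in combinations(node.gpus, len(subtasks)):
        interference = sum(g.load for g in combo)
        comm_cost = max(g.gpu_id for g in combo) - min(g.gpu_id for g in combo)
        cost = interference + 0.1 * comm_cost
        if cost < best_cost:
            best, best_cost = combo, cost
    return {st: g.gpu_id for st, g in zip(subtasks, best)}

if __name__ == "__main__":
    cluster = [
        Node("node-a", [GPU(0, 0.7), GPU(1, 0.1), GPU(2, 0.2), GPU(3, 0.9)],
             {"gpu_util": 0.5, "mem_util": 0.4, "net_util": 0.2}),
        Node("node-b", [GPU(0, 0.1), GPU(1, 0.2), GPU(2, 0.1), GPU(3, 0.3)],
             {"gpu_util": 0.2, "mem_util": 0.3, "net_util": 0.1}),
    ]
    task = DLTask("resnet50-train", num_gpus=2,
                  profile={"gpu_util": 0.8, "mem_util": 0.5, "net_util": 0.3})
    node = pick_target_node(task, cluster)
    placement = assign_gpus(split_into_subtasks(task), node)
    print(node.name, placement)
```

In a real system the interference term would come from profiled co-location behavior and the communication term from the actual GPU interconnect topology (for example NVLink versus PCIe hops); the brute-force search over GPU combinations is only workable for the handful of GPUs found in a single node.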

Description

Technical Field

[0001] The present application relates to the field of Internet technology, and in particular to a GPU cluster deep learning task parallelization method, device and electronic equipment.

Background Technique

[0002] With the deepening of deep learning research, deep learning technology has achieved fruitful results in computer vision, speech recognition, text processing and other fields, bringing great convenience to people's lives. However, the complex structure of neural network models and the huge amount of data place higher demands on computing power. A GPU (Graphics Processing Unit) cluster integrates multiple GPU computing resources, provides powerful and efficient parallel computing capabilities for computation-intensive deep learning tasks, and effectively meets the computing needs of multiple deep learning tasks.

[0003] However, when a deep learning task runs on a resource-sharing GPU cloud platform, its execution eff...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/50, G06N3/00
CPC: G06F9/5027, G06N3/006
Inventors: 张海涛, 耿欣, 马华东
Owner: BEIJING UNIV OF POSTS & TELECOMM