
Resource management method and device for distributed machine learning tasks

A machine-learning resource-management technology, applied in the field of resource management for distributed machine learning tasks, which addresses the problem that the impact of different cache modes on memory allocation and performance is not considered.

Pending Publication Date: 2021-03-09
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0004] Embodiments of the present invention provide a resource management method and device for distributed machine learning tasks, to at least solve the technical problem that existing resource management systems do not consider the impact of different cache modes on memory allocation and performance.



Examples


Embodiment 1

[0030] According to an embodiment of the present invention, a resource management method for distributed machine learning tasks is provided; see Figure 1. The method includes the following steps:

[0031] S101: The user submits a machine learning task that contains two pieces of information: the size of the data set and the number of containers;

[0032] S102: The prediction model calculates the memory allocation size from the data set size and the number of containers, and simultaneously selects the corresponding cache mode;

[0033] S103: Memory allocation is divided into two cases according to the chosen cache mode: when memory is sufficient, the optimal-performance model is selected; when memory is insufficient, the optimal-resource-utilization model is selected.

[0034] The resource management method for distributed machine learning tasks in the embodiments of the present invention saves resources and improves task performance through memory ...
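Steps S101-S103 can be sketched as a short Python example. The linear prediction formula, the 20% overhead factor, and the cache-mode names (borrowed from Spark-style storage levels) are illustrative assumptions, not details taken from the patent.

```python
def allocate_resources(dataset_size_gb: float, num_containers: int,
                       available_memory_gb: float) -> dict:
    """Sketch of the patent's flow: predict memory, then pick a cache mode.

    The prediction model and mode names here are assumed for illustration.
    """
    # S102: predict per-container memory from data set size and container count.
    # A simple linear model with 20% overhead stands in for the prediction model.
    per_container_gb = dataset_size_gb / num_containers * 1.2
    required_gb = per_container_gb * num_containers

    # S103: two cases depending on whether memory is sufficient.
    if required_gb <= available_memory_gb:
        cache_mode = "MEMORY_ONLY"       # optimal-performance model
    else:
        cache_mode = "MEMORY_AND_DISK"   # optimal-resource-utilization model
        per_container_gb = available_memory_gb / num_containers

    return {"memory_per_container_gb": round(per_container_gb, 2),
            "cache_mode": cache_mode}

# S101: the user supplies the data set size and the container count.
plan = allocate_resources(dataset_size_gb=100, num_containers=10,
                          available_memory_gb=256)
```

When the predicted demand fits in the available memory, the sketch keeps everything cached in memory for speed; otherwise it falls back to a disk-spilling mode and caps each container's budget at what is actually available.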

Embodiment 2

[0051] According to another embodiment of the present invention, a resource management device for distributed machine learning tasks is provided; see Figure 4. The device includes:

[0052] The submission unit 201, used for the user to submit a machine learning task that contains two pieces of information: the size of the data set and the number of containers;

[0053] The cache mode selection unit 202, used for the prediction model to calculate the memory allocation size from the data set size and the number of containers, and simultaneously select the corresponding cache mode;

[0054] The memory allocation unit 203, configured to divide memory allocation into two cases according to the chosen cache mode: when memory is sufficient, the optimal-performance model is selected; when memory is insufficient, the optimal-resource-utilization model is selected.

[0055] The resource management device for distributed machine ...
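Mirroring units 201-203 above, the device can be sketched as three cooperating Python classes. The class names, the overhead factor, and the cache-mode strings are assumptions for illustration, not the patent's own implementation.

```python
class SubmissionUnit:
    """Unit 201: receives the user's task (data set size, container count)."""
    def submit(self, dataset_size_gb: float, num_containers: int) -> dict:
        return {"dataset_size_gb": dataset_size_gb,
                "num_containers": num_containers}

class CacheModeSelectionUnit:
    """Unit 202: predicts total memory demand and picks a cache mode."""
    OVERHEAD = 1.2  # assumed caching overhead factor, not from the patent

    def select(self, task: dict, available_memory_gb: float):
        needed_gb = task["dataset_size_gb"] * self.OVERHEAD
        mode = ("MEMORY_ONLY" if needed_gb <= available_memory_gb
                else "MEMORY_AND_DISK")
        return needed_gb, mode

class MemoryAllocationUnit:
    """Unit 203: allocates per-container memory for the chosen mode."""
    def allocate(self, needed_gb: float, mode: str, task: dict,
                 available_memory_gb: float) -> float:
        # Sufficient memory: performance-optimal budget; otherwise cap the
        # budget at what is available (resource-utilization model).
        budget = needed_gb if mode == "MEMORY_ONLY" else available_memory_gb
        return budget / task["num_containers"]

# Wiring the three units together, as Figure 4 suggests.
task = SubmissionUnit().submit(dataset_size_gb=100, num_containers=10)
needed, mode = CacheModeSelectionUnit().select(task, available_memory_gb=256)
per_container = MemoryAllocationUnit().allocate(needed, mode, task, 256)
```

Keeping the three responsibilities in separate units matches the device claim's structure: submission, prediction plus mode selection, and allocation can each be swapped out independently.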

Embodiment 3

[0066] A storage medium stores program files capable of implementing any one of the above-mentioned resource management methods for distributed machine learning tasks.



Abstract

The invention relates to the field of machine learning tasks, and in particular to a resource management method and device for distributed machine learning tasks. In the method, a user submits a machine learning task that contains two pieces of information: the data set size and the number of containers. A prediction model calculates the memory allocation size from the data set size and the number of containers, and selects a corresponding cache mode. Memory allocation is then divided into two cases according to the chosen cache mode: when memory is sufficient, an optimal-performance model is selected; when memory is insufficient, an optimal-resource-utilization model is selected. The method mainly analyzes the characteristics of distributed machine learning and the resource management behavior of the computing framework, and constructs a memory prediction and cache-mode selection model from this analysis; no extra application profiling is needed, and memory allocation and cache-mode selection are carried out directly for a new machine learning task.

Description

Technical Field

[0001] The present invention relates to the field of machine learning tasks, and in particular to a resource management method and device for distributed machine learning tasks.

Background Art

[0002] Before running a distributed machine learning task, users need to pre-allocate resources. The application is dynamic at runtime: resource usage is not fixed and depends on the task itself and the amount of data processed. Besides allocating memory, users must also choose a caching mode; the memory allocation size is strongly coupled with the cache mode, and both affect task performance. To reduce the user's burden of memory allocation and cache-mode selection, a resource management system for distributed machine learning tasks should be designed.

[0003] The Quasar system, designed by Delimitrou Christina of the Department of Computer Science, Stanford University, et al., runs small data in advance t...

Claims


Application Information

IPC(8): G06F9/50; G06F12/02; G06N20/00
CPC: G06F9/5016; G06F12/0253; G06N20/00; Y02D10/00
Inventors: 罗树添, 叶可江, 须成忠
Owner SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI