
Task scheduling method, device and equipment and computer storage medium

A task scheduling technology applied in the field of deep learning. It addresses problems such as unreasonable operator task scheduling and insufficient storage resources, achieving reasonable task scheduling and alleviating the shortage of storage resources.

Active Publication Date: 2021-06-11
BEIJING BAIDU NETCOM SCI & TECH CO LTD

AI Technical Summary

Problems solved by technology

[0003] However, hardware storage resources, such as video memory, are usually limited. If operator task scheduling is unreasonable, the system will face the problem of insufficient storage resources.

Method used

Figure 1 is a flow chart of the main method provided by the present disclosure; Figure 2 is a detailed method flow chart; Figure 3 is a schematic structural diagram of the task scheduling device.


Examples


Embodiment 1

[0027] Figure 1 is a flow chart of the main method provided by Embodiment 1 of the present disclosure. In many application scenarios, target tasks need to be scheduled on a device across multiple threads in parallel to improve computing efficiency. Such devices may be server devices, computer devices with relatively strong computing capabilities, and the like, and the present disclosure can be applied to them. As shown in Figure 1, the method may include the following steps:

[0028] In 101, the operator tasks that can be executed concurrently are prioritized according to the hardware execution cost of each operator task in the target task, obtained through pre-analysis.

[0029] The target task can be any computationally intensive task. A typical target task is a training task or an application task within a deep learning framework, that is, a task of training or applying a deep learning model.
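The scheduling described in steps 101 and the abstract (rank concurrently executable operator tasks by pre-analyzed execution cost, then execute each only if its newly required resources fit the currently available resources) can be sketched as follows. This is a minimal illustration under assumptions: the field names (`exec_cost`, `new_resource`, `recovered`) and the highest-cost-first ordering are hypothetical, not the patent's actual implementation.

```python
def schedule(ready_tasks, available_memory):
    """Schedule concurrently executable operator tasks in priority order,
    deferring any task whose newly required resources exceed the
    currently available resources."""
    # Assumed priority rule: higher hardware execution cost first.
    queue = sorted(ready_tasks, key=lambda t: t["exec_cost"], reverse=True)
    executed, deferred = [], []
    for task in queue:
        if task["new_resource"] <= available_memory:
            # Execute: claim the newly added resources now...
            available_memory -= task["new_resource"]
            executed.append(task["name"])
            # ...and return the resources recovered after execution.
            available_memory += task["recovered"]
        else:
            # Not enough storage resources at the moment: defer.
            deferred.append(task["name"])
    return executed, deferred, available_memory


tasks = [
    {"name": "matmul", "exec_cost": 9, "new_resource": 4, "recovered": 2},
    {"name": "relu",   "exec_cost": 1, "new_resource": 1, "recovered": 1},
    {"name": "conv",   "exec_cost": 5, "new_resource": 8, "recovered": 3},
]
executed, deferred, mem = schedule(tasks, available_memory=6)
# executed == ["matmul", "relu"], deferred == ["conv"], mem == 4
```

The resource check before each execution is what prevents the out-of-memory condition described in [0003]: an expensive task that would overflow the pool is simply deferred rather than scheduled.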

[...

Embodiment 2

[0038] Figure 2 is a detailed method flow chart provided by Embodiment 2 of the present disclosure. As shown in Figure 2, the method may include the following steps:

[0039] In 201, the hardware occupancy information of each operator task in the target task is determined in advance.

[0040] The hardware occupancy information may include the newly added hardware resource occupancy, the resources recovered after execution, and the hardware execution cost.
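The three pieces of per-operator hardware occupancy information named above can be modeled as a small record; the field names here are assumptions chosen for illustration:

```python
from dataclasses import dataclass


@dataclass
class HardwareOccupancy:
    """Per-operator hardware occupancy information (hypothetical names).

    new_resource: hardware resources newly occupied when the operator runs
    recovered:    resources released once the operator finishes
    exec_cost:    pre-analyzed hardware execution cost, used for ranking
    """
    new_resource: int
    recovered: int
    exec_cost: float
```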

[0041] This step can be performed using, but is not limited to, the following two methods:

[0042] The first method: in the compilation stage of the target task, determine the hardware occupancy information of each operator task according to the size of the specified input data and the dependencies between the operator tasks in the target task.
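The compile-stage deduction in the first method can be sketched as propagating sizes through the operator graph. This is a simplified illustration (a chain of single-parent operators); the function name, the tuple layout, and the per-operator size functions are all assumptions, not the patent's mechanism:

```python
def infer_sizes(input_size, ops):
    """Deduce each operator's output size at compile time.

    ops: list of (name, parent_or_None, size_fn) in topological order,
    where size_fn maps an operator's input size to its output size.
    """
    sizes = {}
    for name, parent, size_fn in ops:
        # An operator's input size comes from the specified input data
        # or, via the dependency edge, from its parent's output.
        in_size = input_size if parent is None else sizes[parent]
        sizes[name] = size_fn(in_size)
    return sizes


ops = [
    ("embed", None, lambda n: n * 8),     # expands the input
    ("proj", "embed", lambda n: n // 2),  # halves its input
]
# infer_sizes(4, ops) == {"embed": 32, "proj": 16}
```

Once each operator's input and output sizes are known, its newly added resource occupancy and post-execution recovery follow directly, which is what paragraph [0043] goes on to describe.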

[0043] For example, after a user builds a deep learning model, the size of the input data and output data of each operator task can be deduced in the model compilation stage according to the siz...

Embodiment 3

[0074] Figure 3 is a schematic structural diagram of a task scheduling device provided in Embodiment 3 of the present disclosure. The device may be an application located on the server side, or a functional unit such as a plug-in or a software development kit (SDK) within such an application; alternatively, it may be located at a computer terminal, which is not particularly limited in this embodiment. As shown in Figure 3, the apparatus 300 may include a sorting unit 301 and a scheduling unit 302, and may further include a first analyzing unit 303, a second analyzing unit 304 and a determining unit 305. The main functions of each unit are as follows:

[0075] The sorting unit 301 is configured to prioritize the operator tasks that can be executed concurrently according to the hardware execution cost of each operator task in the target task obtained through pre-analysis....
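The unit decomposition of apparatus 300 can be sketched as a class whose methods mirror the sorting and scheduling units; the method names, signatures, and the tuple layout of the occupancy table are illustrative assumptions, not the patent's API:

```python
class TaskSchedulingApparatus:
    """Sketch of apparatus 300: a sorting unit plus a scheduling unit."""

    def __init__(self, occupancy):
        # occupancy: task name -> (new_resource, recovered, exec_cost),
        # as produced by the analyzing/determining units (303-305).
        self.occupancy = occupancy

    def sorting_unit(self, task_names):
        # Unit 301: prioritize concurrently executable tasks by their
        # pre-analyzed hardware execution cost (assumed: highest first).
        return sorted(task_names,
                      key=lambda n: self.occupancy[n][2], reverse=True)

    def scheduling_unit(self, task_name, available):
        # Unit 302: decide whether to execute the current task now, by
        # comparing its newly required resources with what is available.
        return self.occupancy[task_name][0] <= available
```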



Abstract

The invention discloses a task scheduling method, device, equipment and computer storage medium, relating to deep learning technology in the field of artificial intelligence. In the specific implementation scheme, the method comprises: prioritizing the operator tasks that can be executed concurrently according to the hardware execution cost of each operator task in a target task, obtained by pre-analysis; and scheduling the concurrently executable operator tasks in sequence according to the priority ranking result, wherein the scheduling comprises determining whether to execute the current to-be-scheduled operator task according to the amount of newly added hardware resources it requires and the amount of hardware resources currently available to the system. The invention thereby achieves reasonable task scheduling and alleviates the problem of insufficient storage resources.

Description

Technical Field

[0001] The present disclosure relates to the field of computer application technology, and in particular to deep learning technology in the field of artificial intelligence.

Background

[0002] The deep learning framework is one of the basic technologies for the development of artificial intelligence. Within a deep learning framework, the training and application of deep learning models require a large number of tasks to complete. As the amount of computation has grown, concurrent execution has gradually emerged as the processing method: according to the dependencies between operators in the deep learning model, operator tasks are scheduled concurrently as much as possible.

[0003] However, hardware storage resources, such as video memory, are usually limited. If operator task scheduling is unreasonable, the system will face the problem of insufficient storage resources.

Contents ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48; G06F9/50
CPC: G06F9/4881; G06F9/5011; G06F9/5044; G06F9/5022; G06F9/5038; G06F2209/484; G06F2209/5021
Inventor: 陈秋良, 刘红雨, 蓝翔
Owner: BEIJING BAIDU NETCOM SCI & TECH CO LTD