
Task load scheduling method, device and equipment and readable storage medium

A task load scheduling technology in the field of artificial intelligence, addressing problems such as low task processing efficiency, wasted computing resources, and the inability to fully utilize processor computing resources.

Inactive Publication Date: 2020-11-03
LANGCHAO ELECTRONIC INFORMATION IND CO LTD

AI Technical Summary

Problems solved by technology

[0004] Therefore, it is necessary to build a hybrid heterogeneous distributed computing system that includes multiple computing architectures at the same time. However, existing artificial intelligence computing frameworks only abstract the artificial intelligence algorithm network model into a computing task graph and assign each subtask to a different processor for processing. Load scheduling through task allocation alone cannot make full use of the processors' computing resources, resulting in wasted computing resources and low task processing efficiency.



Examples


Embodiment 1

[0054] See Figure 1, an implementation flowchart of the task load scheduling method in an embodiment of the present invention; the method may include the following steps:

[0055] S101: Analyze the received task load scheduling request to obtain the target task to be scheduled.

[0056] When a deep learning network model training task needs to be processed, a task load scheduling request is sent to the task load scheduling center, and the task load scheduling request includes the target task to be scheduled. The task load scheduling center receives the task load scheduling request, and analyzes the received task load scheduling request to obtain the target task to be scheduled.
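As a minimal sketch of step S101 (the patent does not specify a request format, so JSON and all field names here are hypothetical assumptions), the scheduling center's request parsing might look like:

```python
import json

def parse_scheduling_request(raw_request: str) -> dict:
    """Parse a task load scheduling request and extract the target task.

    Hypothetical illustration: the request is assumed to be JSON with a
    "target_task" field; the patent does not disclose the actual format.
    """
    request = json.loads(raw_request)
    return request["target_task"]  # the target task to be scheduled

# A deep-learning training job submits a request to the scheduling center:
raw = '{"target_task": {"name": "resnet_training", "kind": "cnn"}}'
task = parse_scheduling_request(raw)
```

The scheduling center would then pass the extracted task on to the hybrid heterogeneous system, as described in S102 below.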

[0057] S102: Send the target task to the hybrid heterogeneous distributed computing system.

[0058] Wherein, the hybrid heterogeneous distributed computing system includes multiple computing devices with different computing architectures.

[0059] A hybrid heterogeneous distr...
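The hybrid heterogeneous system described in S102 can be sketched as follows; this is an illustrative data model only, with class and architecture names assumed rather than taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ComputingDevice:
    # Architecture is one of several heterogeneous types, e.g. "GPU",
    # "FPGA", or "ASIC" (examples named in the patent's background).
    arch: str

@dataclass
class ComputingNode:
    # Each computing node holds at least one computing device.
    devices: list

@dataclass
class HybridSystem:
    nodes: list

    def architectures(self) -> set:
        # Distinct computing architectures present across all nodes.
        return {d.arch for n in self.nodes for d in n.devices}

# A system mixing GPU and FPGA devices across two nodes:
system = HybridSystem(nodes=[
    ComputingNode(devices=[ComputingDevice("GPU"), ComputingDevice("GPU")]),
    ComputingNode(devices=[ComputingDevice("FPGA")]),
])
```

Because `architectures()` returns more than one entry here, the system qualifies as hybrid heterogeneous in the sense used above.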

Embodiment 2

[0065] See Figure 2, another implementation flowchart of the task load scheduling method in an embodiment of the present invention; the method may include the following steps:

[0066] S201: Traverse each computing node in the hybrid heterogeneous distributed computing system to obtain a device set composed of the computing devices in each computing node.

[0067] The hybrid heterogeneous distributed computing system includes multiple computing nodes, and each computing node includes at least one computing device (the number of devices per node is denoted Device_Num). Each computing node in the hybrid heterogeneous distributed computing system is traversed to obtain a device set composed of the computing devices in each computing node.

[0068] S202: Perform a numbering operation on each computing device in the device set to obtain device number information corresponding to each computing device.

[0069] After obtaining the device set composed of each computing device in each computing node, perform...
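Steps S201 and S202 can be sketched together; this is a hedged illustration in which nodes are simply lists of device descriptors, since the patent excerpt does not disclose the concrete data structures:

```python
def build_device_set(nodes: list) -> list:
    """S201: traverse each computing node and collect its computing
    devices into one device set (kept as an ordered list here)."""
    device_set = []
    for node in nodes:
        device_set.extend(node)  # each node is a list of device descriptors
    return device_set

def number_devices(device_set: list) -> dict:
    """S202: perform a numbering operation on each device, yielding
    device number information (number -> device)."""
    return {i: dev for i, dev in enumerate(device_set)}

# Two nodes: one with two GPUs, one with an FPGA (hypothetical names):
nodes = [["node0_gpu0", "node0_gpu1"], ["node1_fpga0"]]
numbered = number_devices(build_device_set(nodes))
```

The resulting mapping gives every device in the system a unique number, which the later scheduling steps can use to address devices across nodes.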



Abstract

The invention discloses a task load scheduling method comprising the following steps: parsing a received task load scheduling request to obtain a target task to be scheduled; and sending the target task to a hybrid heterogeneous distributed computing system, wherein the hybrid heterogeneous distributed computing system comprises a plurality of computing devices with different computing architectures and performs load scheduling processing on the target task according to a pre-established device topology diagram. By applying the technical scheme provided by the embodiments of the invention, the computing requirements of multi-modal artificial intelligence algorithm models are met, efficient computing cooperation among computing devices of different computing architectures is realized, and the overall performance of the hybrid heterogeneous distributed computing system is improved. The invention further discloses a task load scheduling device, equipment, and a storage medium, which have corresponding technical effects.
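The abstract's "pre-established device topology diagram" can be pictured as a graph over device interconnects; the sketch below is a hypothetical minimal structure (the patent excerpt does not disclose how the topology is actually built or weighted):

```python
from collections import defaultdict

class DeviceTopology:
    """Hypothetical device topology graph: devices are vertices and
    interconnects are undirected edges."""

    def __init__(self):
        self.links = defaultdict(set)

    def connect(self, a: str, b: str) -> None:
        # Record an undirected interconnect between two devices.
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbors(self, device: str) -> set:
        # Devices directly reachable from the given device.
        return self.links[device]

# Two GPUs linked to each other, one also linked to an FPGA:
topo = DeviceTopology()
topo.connect("gpu0", "gpu1")
topo.connect("gpu0", "fpga0")
```

A scheduler could consult such a graph to place cooperating subtasks on directly connected devices, which is the kind of cross-architecture cooperation the abstract claims.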

Description

Technical field

[0001] The present invention relates to the technical field of artificial intelligence, and in particular to a task load scheduling method, device, equipment, and computer-readable storage medium.

Background technique

[0002] With the continuous development of artificial intelligence technology, the computing-power demand of artificial intelligence (AI) algorithm models keeps growing, and with the development of multi-modal artificial intelligence, the differences in the computing characteristics of artificial intelligence algorithm model training keep widening. For example, the convolutional neural network (CNN, Convolutional Neural Network) model contains a large number of matrix operations and is suitable for heterogeneous acceleration on GPU devices; graph neural network models contain a large number of irregular computations and are better suited to customized processors, including FPGA or ASIC chips; while multi-mo...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48; G06F9/50
CPC: G06F9/4843; G06F9/5083
Inventor: 郭振华, 范宝余, 王丽, 赵雅倩
Owner: LANGCHAO ELECTRONIC INFORMATION IND CO LTD