
GPU scheduling method and system based on asynchronous data transmission

A GPU scheduling method based on asynchronous data transmission, applied in the field of GPU scheduling. It addresses problems such as high latency and low throughput, achieving the effects of smoothing latency variation, guaranteeing throughput and low latency, and hiding data transmission time.

Pending Publication Date: 2021-02-09
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

To solve the high-latency and low-throughput problems faced by deep learning inference in real production environments, the present invention provides a scheduling method for deep learning inference systems. It uses the relationship among concurrency, latency, and batch size to predict the size of the next batch of jobs, thereby hiding data transmission, so that the system can shorten job latency while still meeting its throughput requirements.
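
The publication does not spell out the exact form of this relationship at this point. Purely as an illustrative sketch, assuming linear transfer and compute costs (the symbols c, b, T, C below are our notation, not the patent's), the concurrency-latency-batch-size relationship could look like:

```latex
% Illustrative sketch only -- the patent's actual model is not disclosed here.
% Assume linear per-batch costs for a batch of size b:
%   T(b) = t_0 + k_t b   (CPU-to-GPU transfer time)
%   C(b) = c_0 + k_c b   (GPU compute time)
% With request arrival rate (concurrency) c, gathering b requests takes
% roughly b/c, and with transfer hidden behind compute the slower stage
% dominates, giving latency L and throughput X as functions of c and b:
\[
  L(c, b) \approx \frac{b}{c} + \max\bigl(T(b),\, C(b)\bigr),
  \qquad
  X(c, b) \approx \frac{b}{\max\bigl(T(b),\, C(b)\bigr)}
\]
% The scheduler then predicts the next batch size as the b that keeps X at
% the throughput target while minimizing L for the observed concurrency c.
```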




Detailed Description of Embodiments

[0058] After investigating the characteristics of the GPU, the inventors found that the GPU supports asynchronous execution of data transmission and computation. During deep learning inference, therefore, CPU-to-GPU data transmission and GPU computation can run asynchronously, and hiding the data transmission behind computation can greatly reduce the final latency. Meanwhile, existing methods strive to achieve both high throughput and low latency for deep learning inference, but in practice the two are difficult to achieve simultaneously. The present invention therefore proposes a quantitative model with concurrency as the independent variable and system throughput and latency as the dependent variables. Based on this model, a scheduling algorithm that uses two processes to hide the data transmission delay is implemented to improve system performance. The present invention can calculate and determine the next batch size from the information of the batch job currently being executed, so that GPU data transmission and computation proceed fully in parallel.
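
The two-process implementation itself is not published in this excerpt. As a minimal sketch of the underlying idea, assuming PyTorch and two CUDA streams in a single process (rather than the patent's two processes), the copy of the next batch can be overlapped with inference on the current batch; `pipelined_inference` and its arguments are hypothetical names:

```python
# Minimal sketch, not the patent's code: overlap CPU-to-GPU transfer of
# batch i+1 with GPU compute on batch i using two CUDA streams.
import torch

@torch.no_grad()
def pipelined_inference(model, host_batches):
    """model is assumed to live on the GPU; host_batches yields CPU tensors."""
    copy_stream = torch.cuda.Stream()     # dedicated to host-to-device copies
    compute_stream = torch.cuda.Stream()  # dedicated to inference kernels
    results = []
    current = None
    for host_batch in host_batches:
        # Pinned host memory is required for a truly asynchronous copy.
        pinned = host_batch.pin_memory()
        with torch.cuda.stream(copy_stream):
            incoming = pinned.to("cuda", non_blocking=True)
        if current is not None:
            with torch.cuda.stream(compute_stream):
                # Tell the allocator this tensor (allocated on copy_stream)
                # is consumed on compute_stream, so it is not reused early.
                current.record_stream(compute_stream)
                results.append(model(current))
        # Compute on `incoming` (issued next iteration) must wait for its copy.
        compute_stream.wait_stream(copy_stream)
        current = incoming
    if current is not None:
        with torch.cuda.stream(compute_stream):
            current.record_stream(compute_stream)
            results.append(model(current))
    torch.cuda.synchronize()
    return results
```

Because the compute of batch i is enqueued before the stream-wait for batch i+1's copy, that copy and that compute run concurrently, which is the "hidden data transmission" effect this paragraph describes.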



Abstract

The invention provides a GPU scheduling method and system based on asynchronous data transmission. During deep learning inference, data transmission from the CPU to the GPU and GPU computation are executed asynchronously, which can greatly shorten the final latency. The invention therefore provides a quantitative model that takes concurrency as the independent variable and system throughput and latency as the dependent variables. Based on this model, a scheduling algorithm that hides the data transmission delay using two processes is realized, improving system performance. The method and system calculate and determine the next batch size from the information of the batch job currently being executed, so that the GPU data transmission and computation processes are fully parallel. Meanwhile, the algorithm matches the continuously changing concurrency in real time, minimizing job latency while meeting the real-time throughput requirement.
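
Continuing the illustrative model sketched earlier, here is a hedged example of how "calculate the next batch size from the batch currently executing" might look; all names and the linear cost coefficients are assumptions, not values from the patent:

```python
# Minimal sketch, not the patent's algorithm: while the current batch runs,
# pick the next batch size from the observed concurrency and fitted costs.
def next_batch_size(concurrency, t0, kt, c0, kc, latency_budget, max_batch=256):
    """Largest batch size whose predicted per-request latency fits the budget.

    concurrency    -- observed request arrival rate (requests/second)
    t0, kt         -- fitted transfer model: T(b) = t0 + kt * b  (seconds)
    c0, kc         -- fitted compute model:  C(b) = c0 + kc * b  (seconds)
    latency_budget -- per-request latency target (seconds)
    """
    best = 1
    for b in range(1, max_batch + 1):
        wait = b / max(concurrency, 1e-9)   # time to accumulate b requests
        transfer = t0 + kt * b
        compute = c0 + kc * b
        # With transfer hidden behind compute, the slower stage dominates.
        latency = wait + max(transfer, compute)
        if latency <= latency_budget:
            best = b                        # larger batches raise throughput
    return best
```

Because the prediction depends only on the observed concurrency and the fitted cost models, it can be recomputed for every batch, matching continuously changing concurrency in real time as the abstract claims.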

Description

Technical field

[0001] The present invention relates to the technical field of GPU scheduling, and in particular to a GPU scheduling method and system based on asynchronous data transmission.

Background technique

[0002] Deep learning comprises two processes, training and inference; inference applies what was learned during training to practice, so real production environments pay more attention to inference. Because the computing-power demands of deep learning models keep increasing, and the number of users and submitted tasks of deep learning applications is growing rapidly on the product side, accelerating and optimizing deep learning inference systems has become one of the current research hotspots.

[0003] In recent years, accelerated optimization methods for deep learning inference have begun to develop. Most of the current methods focus on using dedicated hardware to reduce...


Application Information

IPC(8): G06F9/50, G06N5/04
CPC: G06F9/5016, G06F9/5027, G06N5/04, Y02D10/00
Inventors: 万晓华, 赵方圆, 张法, 刘新宇
Owner: INST OF COMPUTING TECH CHINESE ACAD OF SCI