A Cooperative Scheduling Method for Mixed Data Flows in Cloud Data Center Network

A cooperative scheduling technology for cloud data centers, applied in the field of cloud computing, which addresses the problem that existing schemes sacrifice the throughput of throughput-intensive data flows in the data center network.

Active Publication Date: 2021-05-11
SHANGHAI JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

However, Qjump significantly sacrifices the throughput of throughput-intensive data flows in data center networks.

Method used



Examples


Embodiment Construction

[0040] The present invention will be described in detail below in conjunction with specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be noted that those skilled in the art may make various changes and improvements without departing from the concept of the present invention; these all fall within the protection scope of the present invention.

[0041] The data flows in a data center can be roughly divided into query flows (2 KB to 20 KB), delay-sensitive short flows (100 KB to 1 MB), and throughput-intensive long flows (1 MB to 100 MB). Compared with long flows, query flows and short flows are more sensitive to latency; these flows are usually generated by user interactions, such as submitting search queries or fetching order lists. Long flows typically correspond to operations such as downloading large data files or performing disk backups...
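As a simple illustration of this classification, the size thresholds below are taken directly from paragraph [0041]; the function name and class labels are hypothetical and not part of the patent.

```python
# Hypothetical sketch: bucketing data-center flows by size, using the rough
# thresholds from paragraph [0041]. Sizes outside the stated ranges are
# labeled "other" rather than guessed at.

def classify_flow(size_bytes: int) -> str:
    """Map a flow's size to one of the three traffic classes from [0041]."""
    KB, MB = 1024, 1024 * 1024
    if 2 * KB <= size_bytes <= 20 * KB:
        return "query"                       # user-facing queries, latency-critical
    if 100 * KB <= size_bytes <= 1 * MB:
        return "delay_sensitive_short"       # short flows from interactive operations
    if 1 * MB < size_bytes <= 100 * MB:
        return "throughput_intensive_long"   # bulk transfers, disk backups
    return "other"

if __name__ == "__main__":
    for size in (4 * 1024, 512 * 1024, 50 * 1024 * 1024):
        print(size, classify_flow(size))
```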



Abstract

The present invention provides a method for cooperative scheduling of mixed data streams in a cloud data center network, comprising two main components: a time-triggered scheduling algorithm for delay-sensitive streams and an event-triggered scheduling algorithm for throughput-intensive streams. The time-triggered scheduling algorithm ensures that delay-sensitive data streams are assigned to the transmission time slots with the highest transmission priority. The event-triggered scheduling algorithm adopts a non-congestion scheduling principle to make full use of the bandwidth resources of the entire data center: under the premise of guaranteeing high transmission performance for delay-sensitive data streams, it allocates network transmission bandwidth to data streams according to the arrival of throughput-intensive tasks. The invention solves the problem of simultaneously satisfying the requirements for high throughput and low delay in the data center network, and guarantees the transmission accuracy of all delay-sensitive data streams.
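To make the division of labor between the two algorithms concrete, the following is a minimal sketch, assuming a simplified model in which the time-triggered scheduler owns a fixed cycle of transmission slots and the event-triggered scheduler hands out whatever link bandwidth remains after the delay-sensitive reservation. All class names, parameters, and the slot/bandwidth accounting are illustrative assumptions, not the patent's actual interface.

```python
# Sketch of the two cooperating schedulers described in the abstract.
# Everything here is an illustrative simplification of the stated idea.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class Flow:
    flow_id: str
    size_bytes: int
    delay_sensitive: bool

@dataclass
class TimeTriggeredScheduler:
    """Assigns delay-sensitive flows to the highest-priority transmission slots."""
    slots_per_cycle: int
    schedule: dict = field(default_factory=dict)  # slot index -> Flow

    def assign(self, flow: Flow) -> int:
        # Scan the cycle from slot 0 upward so the earliest (highest-priority)
        # free slot always goes to the delay-sensitive flow.
        for slot in range(self.slots_per_cycle):
            if slot not in self.schedule:
                self.schedule[slot] = flow
                return slot
        raise RuntimeError("no free time-triggered slot in this cycle")

@dataclass
class EventTriggeredScheduler:
    """Grants leftover bandwidth to throughput-intensive tasks without congesting the link."""
    link_capacity_bps: float
    reserved_for_delay_sensitive_bps: float
    allocated_bps: float = 0.0
    backlog: deque = field(default_factory=deque)

    def on_task_arrival(self, flow: Flow, demand_bps: float) -> float:
        # Non-congestion principle: never allocate beyond the capacity left
        # over after the delay-sensitive reservation.
        available = (self.link_capacity_bps
                     - self.reserved_for_delay_sensitive_bps
                     - self.allocated_bps)
        grant = min(demand_bps, max(available, 0.0))
        self.allocated_bps += grant
        if grant < demand_bps:
            self.backlog.append((flow, demand_bps - grant))  # wait for bandwidth to free up
        return grant

if __name__ == "__main__":
    tts = TimeTriggeredScheduler(slots_per_cycle=8)
    ets = EventTriggeredScheduler(link_capacity_bps=10e9,
                                  reserved_for_delay_sensitive_bps=2e9)
    print("assigned slot:", tts.assign(Flow("query-1", 8 * 1024, True)))
    print("granted bps:", ets.on_task_arrival(Flow("backup-1", 50 * 2**20, False), 6e9))
```

In this toy model the "cooperation" is simply that the event-triggered scheduler treats the delay-sensitive reservation as untouchable, which mirrors the abstract's requirement that throughput-intensive traffic only consumes bandwidth left over after delay-sensitive flows are protected.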

Description

Technical Field
[0001] The invention relates to cloud computing technology, and in particular to a method for cooperative scheduling of mixed data streams in a cloud data center network.
Background
[0002] With the rapid development of cloud computing technology, the scale of data centers and the number of services they support are growing exponentially, driving the continuous integration of data center infrastructure, supporting systems, resource virtualization, dynamic migration, business information systems, and so on. At the same time, the data center network, which plays a core role in the data center, is also constantly evolving.
[0003] The loads of different network applications in a data center differ greatly, causing different kinds of problems in the actual network. Many distributed systems, such as MapReduce, Hadoop, and TritonSort, have a great demand for network bandwidth resources. Usually, th...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): H04L12/875, H04L29/08, H04L47/56
CPC: H04L47/56, H04L47/564, H04L67/10
Inventors: 姚建国 (Yao Jianguo), 彭博 (Peng Bo), 管海兵 (Guan Haibing)
Owner: SHANGHAI JIAOTONG UNIV