Resource scheduling implementation method based on energy consumption and QoS collaborative optimization

A resource scheduling and collaborative optimization technique, applied to resource allocation, energy-saving computing, program start-up/switching, and related areas. It addresses problems such as unrealistic constraint assumptions, slow convergence, and complicated solution processes, and achieves the effect of improving scheduling efficiency and optimizing total time cost.

Active Publication Date: 2021-10-19
SOUTH CHINA UNIV OF TECH


Problems solved by technology

In actual data center scenarios, load fluctuations and other sources of uncertainty make the dual goals of energy saving and QoS guarantee considerably harder to achieve jointly.
Previous optimization methods require all entities (users, cloud tasks, service providers, etc.) to satisfy a single QoS constraint throughout the entire cloud computing scheduling process, which is unrealistic in real cloud computing environments.
Moreover, the solution processes of these optimization methods are complex and converge slowly, making it difficult to meet the real-time scheduling requirements of large-scale cloud computing data centers.

Method used




Embodiment

[0091] A resource scheduling implementation method based on collaborative optimization of energy consumption and QoS, comprising the following steps:

[0092] S1. Construct a cloud task arrival queuing model for multiple virtual machines (VMs) in a cloud computing data center environment;

[0093] As shown in Figure 1, the cloud task arrival queuing model consists of a host queuing model and a virtual machine (VM) queuing model connected in series, and is used to optimize the relationship between the backlog length of the VM cloud-task queue and system energy consumption;

[0094] In the host queuing model, after a cloud task is submitted to the data center, the data center adopts a load-balancing strategy based on the least-loaded criterion and assigns the cloud task to the host with the fewest unfinished cloud-task requests, thus constituting a queuing model in which the inter-arrival times of cloud tasks are exponentially distributed an...
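The least-loaded dispatch with exponential inter-arrival times described above can be sketched as a minimal simulation. The host count, arrival rate, and the simplification that tasks never complete are illustrative assumptions, not details from the patent:

```python
import heapq
import random

def least_loaded_dispatch(num_hosts, num_tasks, arrival_rate, seed=0):
    """Assign each arriving cloud task to the host with the fewest
    unfinished tasks (least-loaded criterion).  Inter-arrival times
    are drawn from an exponential distribution, as in the model above.
    For simplicity, tasks never complete during the simulation."""
    rng = random.Random(seed)
    # min-heap of (pending_task_count, host_id): least-loaded host on top
    hosts = [(0, h) for h in range(num_hosts)]
    heapq.heapify(hosts)
    t = 0.0
    assignments = []
    for _ in range(num_tasks):
        t += rng.expovariate(arrival_rate)   # exponential inter-arrival time
        load, host = heapq.heappop(hosts)    # current least-loaded host
        assignments.append((t, host))
        heapq.heappush(hosts, (load + 1, host))
    return assignments
```

Because no tasks complete in this sketch, the first `num_hosts` arrivals necessarily land on distinct hosts, after which the dispatcher cycles through them evenly.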



Abstract

The invention discloses a resource scheduling implementation method based on energy consumption and QoS collaborative optimization. The method comprises the following steps: constructing a cloud task arrival queuing model for multiple virtual machines in a cloud computing data center environment; extracting QoS (Quality of Service) features of the data center with a stacked denoising autoencoder to obtain a dimension-reduced matrix describing the QoS feature information, and solving for the maximum response time of the current virtual machine to complete the constraint conditions of the collaborative optimization objective function; and deriving a resource scheduling algorithm based on Lyapunov optimization theory by combining the cloud task arrival queuing model, the collaborative optimization objective function, and the Lyapunov optimization method, then applying this algorithm to realize resource scheduling based on energy consumption and QoS collaborative optimization. The method effectively reduces data center energy consumption while guaranteeing QoS, and overcomes the interference that fluctuating cloud task arrivals in real cloud data center scenarios cause when solving the optimization problem.

Description

Technical Field
[0001] The invention belongs to the field of energy-saving cloud computing scheduling, and in particular relates to a resource scheduling implementation method based on energy consumption and QoS collaborative optimization.
Background
[0002] Cloud computing has long been a popular research topic in the global IT field by virtue of its ultra-large-scale service capabilities. With the continuous development of cloud computing technology, more and more data centers have emerged around the world, and the energy consumption of their infrastructure has grown exponentially. Currently, the carbon emissions of the global IT industry account for 3-5% of total global carbon emissions. According to recent reports, Google data centers consume nearly 300 million watts, while Facebook's data centers consume 60 million watts. Data centers consume more electricity than energy-intensive manufacturing. McKinsey, an international resea...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/455; G06F9/48; G06F9/50
CPC: G06F9/45558; G06F9/4881; G06F9/5027; G06F9/5088; G06F2009/4557; Y02D10/00
Inventors: 刘发贵, 王彬
Owner SOUTH CHINA UNIV OF TECH