
A quality-of-service-aware cache scheduling method based on feedback and fair queues

A quality-of-service-aware scheduling technology applied in the field of computer-architecture cache scheduling. It addresses the problems that existing caches cannot provide quality-of-service guarantees, that data correlation between concurrently running applications is weak, and that the overall locality of the load is reduced.

Active Publication Date: 2022-04-15
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

Therefore, effective quality-of-service guarantees cannot be provided. On the other hand, because multiple applications access the cache concurrently, requests from other applications may be interleaved into an application's access sequence at any time. Since the data correlation between different applications is weak, this interleaving reduces the overall locality of the load, especially for applications with low request arrival rates.




Detailed Description of the Embodiments

[0047] To make the purpose, technical solution, and technical effects of the present invention clearer, the invention is described in further detail below in conjunction with the accompanying drawings.

[0048] The present invention proposes a quality-of-service-aware cache scheduling method based on a feedback structure and fair queues. Using a cache partitioning strategy, the cache is divided into multiple logical partitions: each application corresponds to one logical partition, whose size is dynamically adjusted according to the application's load, and an application may only access its own logical partition. The invention mainly comprises six modules: a quality-of-service measurement strategy, a start-time fair queue, a feedback-based cache partition management module, a cache block allocation management module, a cache eviction policy monitoring module, and a cache compression monitoring module. The servic...
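To make the partitioning idea concrete, the following Python sketch shows a cache split into per-application logical partitions that can be resized at runtime. It is a minimal illustration under assumed names (LogicalPartition, PartitionedCache, load_block), with plain LRU eviction inside each partition; it is not the patented implementation, which layers the six modules above on top of this structure.

```python
from collections import OrderedDict

class LogicalPartition:
    """Per-application cache partition with simple LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity      # blocks this partition may hold
        self.blocks = OrderedDict()   # block_id -> data, oldest first

    def access(self, block_id, load_block):
        """Return a block, loading it from the backing store on a miss."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # hit: refresh recency
            return self.blocks[block_id]
        data = load_block(block_id)            # miss: fetch the block
        self.blocks[block_id] = data
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data

    def resize(self, new_capacity):
        """Grow or shrink the partition as the application's load changes."""
        self.capacity = new_capacity
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)

class PartitionedCache:
    """A cache divided into per-application logical partitions."""

    def __init__(self, total_blocks, app_ids):
        share = total_blocks // len(app_ids)   # start with equal shares
        self.partitions = {app: LogicalPartition(share) for app in app_ids}

    def access(self, app_id, block_id, load_block):
        # An application may only touch its own logical partition.
        return self.partitions[app_id].access(block_id, load_block)
```

A feedback loop could then call resize() periodically, taking blocks away from partitions whose applications exceed their service-quality target (the "providing" partitions described in the abstract) and granting them to partitions that fall short (the "receiving" partitions).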



Abstract

The present invention provides a quality-of-service-aware cache scheduling method based on feedback and fair queues. A quality-of-service measurement strategy indexes the quality of service of different classes of applications. Start-time fair queues assign different start service times to requests, controlling the order in which requests from different applications are served. A feedback-based cache partition management module divides all logical partitions into two types, providing partitions and receiving partitions, and adjusts the cache allocation between the two types. A cache block allocation management module balances overall performance against guaranteed service quality. A cache eviction policy monitoring module monitors the efficiency of each logical partition's current eviction policy and adjusts it dynamically according to the application's load characteristics. Finally, a cache compression monitoring module captures applications with poor locality, that is, applications that exhibit a long tail in their cache hit rate. The invention thus takes into account both overall cache efficiency and the service-quality guarantees between applications.
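The start-time fair queue is the mechanism that orders requests across applications. As a rough sketch of how start-time fair queuing (SFQ) works in general, the Python code below tags each request with a start time and serves requests in increasing start-tag order, so that a flow's weight controls its share of service. The class and parameter names (SFQScheduler, weights, cost) are assumptions for illustration, not the patent's exact formulation.

```python
import heapq
import itertools

class SFQScheduler:
    """Start-time fair queuing (SFQ) sketch.

    Tag rules (classic SFQ):
        start  = max(virtual_time, last finish tag of the flow)
        finish = start + cost / weight
    Requests are served in increasing start-tag order, so flows with
    larger weights are served proportionally more often.
    """

    def __init__(self, weights):
        self.weights = weights                       # app_id -> weight
        self.finish = {app: 0.0 for app in weights}  # last finish tag per app
        self.vtime = 0.0                             # virtual time of server
        self.queue = []                              # min-heap by start tag
        self.seq = itertools.count()                 # tie-breaker for equal tags

    def submit(self, app_id, request, cost=1.0):
        """Tag an incoming request and enqueue it."""
        start = max(self.vtime, self.finish[app_id])
        self.finish[app_id] = start + cost / self.weights[app_id]
        heapq.heappush(self.queue, (start, next(self.seq), app_id, request))

    def dispatch(self):
        """Serve the queued request with the smallest start tag."""
        if not self.queue:
            return None
        start, _, app_id, request = heapq.heappop(self.queue)
        self.vtime = start   # virtual time follows the request in service
        return app_id, request

# Example: app "a" (weight 2) gets twice the service share of app "b" (weight 1).
sched = SFQScheduler({"a": 2.0, "b": 1.0})
for i in range(4):
    sched.submit("a", f"a-{i}")
    sched.submit("b", f"b-{i}")
print([sched.dispatch()[0] for _ in range(8)])
# -> ['a', 'b', 'a', 'b', 'a', 'a', 'b', 'b']: "a" is served earlier on average
```

The weight per application would, in this scheme, be derived from the quality-of-service measurement strategy, so that applications with stricter service targets receive earlier start tags.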

Description

Technical Field

[0001] The invention belongs to the field of cache scheduling in computer architecture, and relates to a quality-of-service-aware cache scheduling method based on feedback and fair queues.

Background

[0002] In recent years, traditional caching algorithms have focused mainly on improving the cache hit rate. The basic approach is to cache the blocks most likely to be accessed, following the principle of locality in storage access. Storage systems today are increasingly consolidated: not only is the number of applications growing, but the types of applications are also becoming more complex, and different applications often differ greatly in load characteristics and access patterns. For example, an email server sees mostly random accesses with few repeated accesses; a web server also sees many random accesses, although some hot pages are accessed repeatedly; and a video server sees mostly sequential accesses...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/48, G06F9/50
CPC: G06F9/4881, G06F9/5038, G06F9/505, G06F2209/484, G06F2209/5021
Inventors: 李勇 (Li Yong), 曾令仿 (Zeng Lingfang), 陈光 (Chen Guang)
Owner: ZHEJIANG LAB