
Internal concurrent I/O scheduling method and system for partitions of data server side

A data-server scheduling technology, applied in the computer field, that addresses problems such as poor per-execution-unit performance

Active Publication Date: 2020-02-25
Minbo Technology (Wuhan) Co., Ltd. (敏博科技(武汉)有限公司)
Problems solved by technology

For each execution unit, however, all requests in its request queue are executed in FIFO order for the sake of data consistency. This execution method does solve the consistency problem at the I/O scheduling level, but it is unfriendly to the performance of each execution unit: different requests within the same execution unit may have no read-write or write-write dependency between them, and such requests could be scheduled for execution earlier, instead of waiting until every request that arrived before them has finished executing.
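The dependency notion above can be made concrete: two requests need ordering only when their byte ranges overlap and at least one of them is a write. The following sketch is illustrative only; the names (`Request`, `conflicts`) are not from the patent text.

```python
from dataclasses import dataclass

@dataclass
class Request:
    op: str          # "read" or "write"
    offset: int      # start of the byte range
    length: int      # size of the byte range in bytes

def overlaps(a: Request, b: Request) -> bool:
    # Half-open byte ranges [offset, offset+length) intersect
    return a.offset < b.offset + b.length and b.offset < a.offset + a.length

def conflicts(a: Request, b: Request) -> bool:
    # Only a read-write or write-write dependency on an overlapping
    # range forces ordering; two reads never conflict.
    if not overlaps(a, b):
        return False
    return a.op == "write" or b.op == "write"
```

Under this definition, requests for which `conflicts` is false are exactly the ones a FIFO queue forces to wait needlessly.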


Embodiment Construction

[0054] The specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

[0055] Figure 1 is a schematic diagram of how a data server in a prior-art storage system processes I/O requests received from the network. The data server 101 receives network I/O requests 102, which are distributed by the I/O request distributor 103 into the corresponding request queues 104. To guarantee data consistency, the distributor 103 ensures that I/O requests concerning the same file, or the same fragment of a file, enter the same request queue; the requests 105 within a queue are organized in FIFO order. Each request queue corresponds to a request execution unit 106, which processes the requests in its queue one by one in FIFO order.
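The prior-art scheme of figure 1 can be sketched as follows. This is a minimal illustration, assuming the distributor hashes a file identifier to pick a queue; the names and queue count are hypothetical, not from the patent.

```python
from collections import deque

NUM_QUEUES = 4
queues = [deque() for _ in range(NUM_QUEUES)]

def dispatch(file_id: int, request: str) -> int:
    """Route a request to the queue bound to its file, so all requests
    for the same file land in the same queue; return the queue index."""
    idx = hash(file_id) % NUM_QUEUES
    queues[idx].append(request)
    return idx

def execute_unit(idx: int):
    """Each execution unit drains its queue strictly in FIFO order."""
    while queues[idx]:
        yield queues[idx].popleft()
```

Because each unit drains strictly in FIFO order, a non-conflicting request at the back of a queue still waits for everything ahead of it, which is the performance problem the patent targets.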

[0056] Figure 2 is a schematic diagram showing the performance problems of the existing storag...



Abstract

The invention provides an internal concurrent I/O scheduling method and system for partitions on the data server side. The method includes: receiving I/O requests; detecting whether a read or write request among the current I/O requests conflicts with a previous request in the request conflict queue or the request execution queue; adding conflicting read or write requests to the corresponding request conflict queue; and adding conflict-free read or write requests to the corresponding request execution queue. The method avoids meaningless waiting between requests that have no read-write or write-write conflict, effectively improves I/O concurrency and response latency within a data-server-side partition, improves the I/O performance of the data server, and thereby improves the overall I/O performance of the storage system.
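The abstract's two-queue idea can be sketched as below, under the assumption that a "conflict" means a read-write or write-write overlap on the same byte range. All names here are illustrative, not the patent's own identifiers, and each request is modeled as an `(op, start, end)` tuple.

```python
def is_write(req):
    # req = (op, start, end) with a half-open byte range [start, end)
    return req[0] == "write"

def overlap(a, b):
    return a[1] < b[2] and b[1] < a[2]

def conflicts(a, b):
    # Ordering is needed only if the ranges overlap and one side writes
    return overlap(a, b) and (is_write(a) or is_write(b))

def schedule(req, exec_queue, conflict_queue):
    """Place req in the conflict queue if it conflicts with any earlier
    pending request; otherwise it may execute concurrently."""
    pending = exec_queue + conflict_queue
    if any(conflicts(req, earlier) for earlier in pending):
        conflict_queue.append(req)
    else:
        exec_queue.append(req)
```

For example, a write to bytes 0-10 followed by a read of bytes 5-15 would route the read into the conflict queue, while a read of a disjoint range could join the execution queue and run concurrently.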

Description

Technical field [0001] The present invention relates to the field of computers, and in particular to a method and system for concurrent I/O scheduling within a partition of a data server. Background technique [0002] In a storage system, all data reads and writes ultimately occur on the data servers, so the performance of the data servers is vital to the performance of the entire storage system. A data server is shared by all clients in the storage system and therefore receives requests from different clients. These requests may concern the same byte range of the same file, different byte ranges of the same file, or byte ranges of different files. To improve the concurrency of request processing, the data server usually partitions received requests according to certain rules and binds an execution unit to each partition. This execution unit will execute the requests belonging...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/48
CPC: G06F9/4881; Y02D10/00
Inventors: Xiao Fei (肖飞), You Chengyi (游成毅)
Owner: Minbo Technology (Wuhan) Co., Ltd. (敏博科技(武汉)有限公司)