
Distributed block storage performance optimization method based on ALUA and local cache

An optimization method and local caching technology, applied in the input/output processes of data processing, instruments, and electrical digital data processing, etc. The method shortens the read path and the write cache path and improves read and write speed, so as to achieve balanced cache utilization and improved performance.

Active Publication Date: 2020-05-22
深圳创新科软件技术有限公司

AI Technical Summary

Problems solved by technology

In the traditional AA multi-path mode there can be no local cache under the Target service, which is one of the main reasons why distributed storage performs worse than stand-alone storage under the same configuration. Because a local cache layer is added, the write cache path is shortened and the client perceives writes as faster; because a read cache is added, some reads hit the cache and return directly, shortening the read path and improving read and write speed.
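The mechanism can be pictured with a small sketch (Python, illustrative only, not the patented implementation): a local cache sits in front of the distributed backend, so read hits return immediately and writes are acknowledged as soon as they reach the local cache. The backend object and its write_async method are hypothetical placeholders.

```python
# Illustrative sketch only (not the patented design): a local cache in front of
# the distributed backend. Read hits return immediately (short read path) and
# writes are acknowledged once they land in the cache (short write cache path).
class LocalCache:
    def __init__(self, backend):
        self.backend = backend   # distributed block-storage cluster (hypothetical interface)
        self.blocks = {}         # block address -> cached data

    def read(self, addr):
        if addr in self.blocks:               # cache hit: served locally
            return self.blocks[addr]
        data = self.backend.read(addr)        # cache miss: fetch from the cluster
        self.blocks[addr] = data
        return data

    def write(self, addr, data):
        self.blocks[addr] = data              # acknowledged here
        self.backend.write_async(addr, data)  # flushed to the cluster in the background
```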


Embodiment Construction

[0018] The present invention will be further described below in conjunction with the accompanying drawings:

[0019] As shown in figure 3: the present invention adopts the ALUA multi-path mode, in which the connection priority of the client is divided into AO (Active/Optimized) and AN (Active/Non-optimized); the client distinguishes AO and AN among the multiple paths and preferentially completes IO requests through the AO path. The method includes step S1, step S2 and step S3:
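As a hedged illustration of the path-selection behaviour described above (the Path and PathState names are invented for the sketch, not taken from the patent), a multipath client that knows each path's ALUA state might pick a path like this:

```python
from enum import Enum

class PathState(Enum):
    AO = "Active/Optimized"
    AN = "Active/Non-optimized"

class Path:
    def __init__(self, target_node, state):
        self.target_node = target_node  # storage node hosting the Target
        self.state = state              # ALUA access state reported by that node

def pick_path(paths):
    """Prefer an AO path; fall back to an AN path only if no AO path exists."""
    ao = [p for p in paths if p.state is PathState.AO]
    an = [p for p in paths if p.state is PathState.AN]
    if ao:
        return ao[0]
    if an:
        return an[0]
    raise RuntimeError("no usable path to the volume")

# Example: two paths to the same volume; IO is sent through node-1 (AO).
paths = [Path("node-1", PathState.AO), Path("node-2", PathState.AN)]
assert pick_path(paths).target_node == "node-1"
```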

[0020] Step S1: Add a local cache layer under the Target layer and a cluster cache balancer under the local cache layer. The local cache layer provides stand-alone (single-node) caching, and the cluster cache balancer is responsible for collecting the cache usage details of each node in real time and balancing the cache utilization of each node as much as possible;
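The kind of bookkeeping Step S1 implies for the cluster cache balancer, collecting each node's cache usage and spotting imbalance, could look like the following minimal sketch; the class and field names are assumptions, not the patent's API:

```python
from dataclasses import dataclass

@dataclass
class NodeCacheStats:
    node_id: str
    cache_capacity: int     # bytes of local cache on the node
    cache_used: int = 0     # bytes currently occupied

    @property
    def utilization(self) -> float:
        return self.cache_used / self.cache_capacity

class ClusterCacheBalancer:
    def __init__(self):
        self.nodes = {}     # node_id -> latest NodeCacheStats

    def report(self, stats):
        """Called periodically by each node with its current cache usage."""
        self.nodes[stats.node_id] = stats

    def imbalance(self):
        """Return (most loaded, least loaded) nodes as candidates for rebalancing."""
        ranked = sorted(self.nodes.values(), key=lambda n: n.utilization)
        return ranked[-1], ranked[0]
```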

[0021] Step S2: Define the cache utilization rate of each node as Hcr and the cache occupancy of each Target on the node as Tcr; then the cache...
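The paragraph is truncated before the relation between Hcr and Tcr, so the exact formula is not given here. A hedged sketch, under the assumption that a node's Hcr aggregates the Tcr of the Targets it hosts over the node's cache capacity:

```python
# Hedged sketch of the Step S2 metrics. Hcr (node cache utilization rate) and
# Tcr (per-Target cache occupancy) come from the text; aggregating Hcr as the
# sum of Tcr over the node's cache capacity is an assumption, since the
# original paragraph is truncated before the formula.
def target_cache_occupancy(cache_bytes_used: int) -> int:
    """Tcr: bytes of local cache currently occupied by one Target."""
    return cache_bytes_used

def node_cache_utilization(tcr_per_target: list[int], cache_capacity: int) -> float:
    """Hcr: fraction of the node's local cache occupied by all of its Targets."""
    return sum(tcr_per_target) / cache_capacity

# Example: a node with a 4 GiB cache hosting two Targets.
hcr = node_cache_utilization([1 << 30, 512 << 20], cache_capacity=4 << 30)
print(f"Hcr = {hcr:.2%}")  # Hcr = 37.50%
```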



Abstract

The invention discloses a distributed block storage performance optimization method based on ALUA and a local cache. An ALUA multi-path mode is adopted, in which the connection priority of the client is divided into AO and AN; the client distinguishes AO and AN among the multiple paths and completes IO requests through the AO path preferentially. Compared with prior-art methods, the distributed block storage improves performance by combining the ALUA multi-path mode with a local cache, and uses the ALUA multi-path mode together with a cluster cache balancer to dynamically adjust path priority and equalize the cache utilization rate of each node in the cluster. The method fuses the advantages of distributed storage and stand-alone storage, so that the IO path has both high reliability and the performance benefits of local caching.
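As a rough illustration of "dynamically adjust path priority" (the concrete rule and threshold below are assumptions, not stated in the abstract), a node whose cache utilization Hcr sits well above the cluster average could have its paths demoted to AN so new IO is steered toward less loaded nodes:

```python
# Hedged sketch: demote a path to AN when its node's cache is much fuller than
# the cluster average, otherwise keep it AO. The threshold and rule are
# assumptions; the abstract only states that path priority follows the
# cluster cache balancer.
def adjust_path_priorities(hcr_by_node: dict[str, float], slack: float = 0.15) -> dict[str, str]:
    average = sum(hcr_by_node.values()) / len(hcr_by_node)
    return {
        node: ("AN" if hcr > average + slack else "AO")
        for node, hcr in hcr_by_node.items()
    }

# Example: node-2's cache is overloaded relative to the cluster, so the client
# would be steered toward node-1 and node-3.
print(adjust_path_priorities({"node-1": 0.40, "node-2": 0.85, "node-3": 0.35}))
# {'node-1': 'AO', 'node-2': 'AN', 'node-3': 'AO'}
```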

Description

Technical Field

[0001] The invention relates to the technical field of data storage, in particular to a distributed block storage performance optimization method based on ALUA and local cache.

Background Technique

[0002] For distributed block storage, the current mainstream multi-path mode between the client and the server is the AA mode: the client establishes iSCSI/FC connections with multiple storage nodes to achieve high path reliability and load balancing. In the AA mode, real-time data consistency must be guaranteed across the multiple paths from the client to each Target, so there is no local cache under the Target service; if there were a cache, it would cause data inconsistency between paths. Figure 1 shows the session connection diagram of traditional mainstream distributed block storage. For traditional stand-alone storage, there is a local cache; figure 2 shows a traditional stand-alone storage architecture diagram, and the local cache i...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F3/06
CPC: G06F3/0611; G06F3/0614; G06F3/0608; G06F3/0635; G06F3/0653; G06F3/067
Inventor: 董文祥
Owner: 深圳创新科软件技术有限公司