
Multi-device mirror images and stripe function-providing disk cache method, device, and system

A disk cache method, device, and system providing multi-device mirroring and striping. The technology applies to memory systems, error-response generation, and memory address allocation/relocation. It solves the problem that cached data cannot be distributed across multiple physical or logical cache devices, adding mirror copies to improve data reliability and enhance performance.

Active Publication Date: 2012-10-03
HUAWEI TECH CO LTD
Cites: 3 · Cited by: 26

AI Technical Summary

Problems solved by technology

[0005] However, this existing technology cannot dynamically distribute data across multiple physical or logical cache devices to achieve parallel reads and writes of cached data.




Embodiment Construction

[0036] The technical solution of the present invention realizes mirroring and striping of cache data inside the cache by improving the cache device management mode of the disk cache and the address mapping method of the cache data blocks, so that users can apply differentiated mirroring and striping configurations according to the characteristics of each cache data block or their own needs, achieving the optimal combination of performance and reliability.
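The per-block configuration described in [0036] can be sketched as metadata that records, for each cache block, its stripe segments and mirror copies across distinct SSD cache devices. This is an illustrative sketch, not the patent's implementation; all names (`CacheBlockInfo`, `configure_block`) and the round-robin placement policy are assumptions.

```python
# Hypothetical sketch of per-block cache metadata that records both stripe
# segments and mirror copies across SSD cache devices. Names and the
# placement policy are illustrative, not from the patent text.
from dataclasses import dataclass

@dataclass
class DeviceAddress:
    device_id: int   # which SSD cache device holds this piece
    offset: int      # byte offset on that device (illustrative)

@dataclass
class CacheBlockInfo:
    block_id: int
    stripe: list     # DeviceAddress per stripe segment (parallel reads)
    mirrors: list    # DeviceAddress per redundant copy (for dirty blocks)
    dirty: bool = False

def configure_block(block_id, devices, stripe_width, mirror_count, dirty=False):
    """Assign stripe segments round-robin over distinct devices, and place
    mirror copies on devices not used by the stripe where possible."""
    stripe = [DeviceAddress(devices[i % len(devices)], block_id * 4096)
              for i in range(stripe_width)]
    stripe_ids = [s.device_id for s in stripe]
    unused = [d for d in devices if d not in stripe_ids]
    mirrors = [DeviceAddress((unused + devices)[i], block_id * 4096)
               for i in range(mirror_count)]
    return CacheBlockInfo(block_id, stripe, mirrors, dirty)

info = configure_block(block_id=7, devices=[0, 1, 2, 3], stripe_width=2,
                       mirror_count=1, dirty=True)
print([s.device_id for s in info.stripe])   # [0, 1] - stripe spans two devices
print([m.device_id for m in info.mirrors])  # [2] - mirror on a third device
```

A different block could be configured with a wider stripe and no mirror, which is the "differentiated" configuration the paragraph describes.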

[0037] In one embodiment, the present invention provides a method for processing cached data. The method includes: a cache management module reads cache block information, where the cache block information includes at least two addresses, and the at least two addresses point to at least two different SSD cache devices; the cache management module initiates data read operations to the at least two SSD cache devices according to the at least two addresses in the cache block information.
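The read path in [0037] can be sketched as follows: read the block's metadata, then issue the segment reads to the (at least two) SSD devices in parallel and reassemble the block. This is a hedged sketch with device I/O simulated by dictionaries; the function names are assumptions.

```python
# Sketch of the described read path: the cache management module reads a
# block's metadata (at least two addresses on different SSD cache devices)
# and issues the segment reads in parallel. Device I/O is simulated.
from concurrent.futures import ThreadPoolExecutor

# Simulated SSD cache devices: device_id -> {offset: data}
ssd_devices = {
    0: {4096: b"hello "},
    1: {4096: b"world"},
}

cache_block_info = {
    "block_id": 1,
    "addresses": [(0, 4096), (1, 4096)],  # (device_id, offset) on two devices
}

def read_segment(addr):
    device_id, offset = addr
    return ssd_devices[device_id][offset]

def read_cache_block(info):
    # Issue all segment reads concurrently, then reassemble in stripe order.
    with ThreadPoolExecutor() as pool:
        segments = list(pool.map(read_segment, info["addresses"]))
    return b"".join(segments)

print(read_cache_block(cache_block_info))  # b'hello world'
```

Because the two segments live on different physical devices, the reads proceed in parallel, which is the performance benefit the method claims.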



Abstract

The invention provides a method, a device, and a system for performing input/output (IO) operations on multiple cache devices. The method comprises the steps that a cache management module reads cache block information comprising at least two addresses, wherein the at least two addresses point to at least two different SSD cache devices; and the cache management module initiates data read operations on the at least two SSD cache devices according to the at least two addresses in the cache block information. Embodiments of the invention achieve the multi-device parallel processing required by IO by striping cache blocks across cache devices, improving cache block read performance; data reliability is improved by adding mirror copies for dirty cache blocks.
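The reliability half of the abstract, mirroring dirty cache blocks, can be sketched as a write path that places the full block on every mirror device, so losing one SSD cache device does not lose unflushed data. All names here are hypothetical, and devices are again simulated with dictionaries.

```python
# Illustrative sketch of the write path implied by the abstract: a dirty
# cache block is written to every mirror copy, so one failed SSD cache
# device does not lose data that has not yet been flushed to disk.
ssd_devices = {0: {}, 1: {}, 2: {}}

def write_dirty_block(block_id, data, mirror_addrs):
    # mirror_addrs: list of (device_id, offset); write the block to each copy.
    for device_id, offset in mirror_addrs:
        ssd_devices[device_id][offset] = data

def read_block(block_id, mirror_addrs, failed=frozenset()):
    # Serve the read from the first surviving mirror.
    for device_id, offset in mirror_addrs:
        if device_id not in failed:
            return ssd_devices[device_id][offset]
    raise IOError("all mirror copies lost")

addrs = [(0, 0), (2, 0)]
write_dirty_block(9, b"dirty-data", addrs)
# Even if device 0 fails, the copy on device 2 still serves the read.
print(read_block(9, addrs, failed={0}))  # b'dirty-data'
```

Clean blocks need no mirror, since they can always be re-read from the backing disk; that is why the abstract attaches mirror copies specifically to dirty blocks.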

Description

Technical Field

[0001] The present invention relates to data caching.

Background

[0002] As is well known, the input/output (IO) speed between disk and memory has long been an important bottleneck for system performance. In IO-intensive application scenarios, the CPU often has to wait for disk IO. At present, many systems use memory or similar volatile media as a disk cache to improve IO speed. However, memory is relatively expensive, and its data is lost when power is turned off. According to the principle of program locality, a high-speed storage device whose speed and capacity lie between those of main memory and the disk can be inserted between them, and instructions or data near the address currently being executed can be transferred from disk into this storage in advance, which greatly improves program performance by shortening the time the CPU waits for data. This has a great effect on improving the running speed of the ...
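The tiered-cache idea in the background section can be shown as a minimal read-through cache: check the fast intermediate tier first, and on a miss fall back to the slow disk and promote the block so that locality makes later reads fast. All names are illustrative.

```python
# Minimal read-through cache sketch of the tiered design in the background
# section: a fast intermediate tier in front of a slow backing disk.
# Stores are simulated with dictionaries; names are illustrative.
disk = {"blk0": b"cold data"}     # slow backing store
ssd_cache = {}                    # fast intermediate tier

def read(block):
    if block in ssd_cache:        # cache hit: no disk wait
        return ssd_cache[block]
    data = disk[block]            # cache miss: slow path to disk
    ssd_cache[block] = data       # promote, so locality pays off next time
    return data

print(read("blk0"))               # miss, served from disk: b'cold data'
print(read("blk0"))               # hit, served from cache: b'cold data'
```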

Claims


Application Information

IPC(8): G06F3/06; G06F13/16
CPC: G06F11/20; G06F11/1666; G06F12/02; G06F12/0895; G06F11/14
Inventors: 秦岭, 温正湖, 章晓峰
Owner: HUAWEI TECH CO LTD