
Write buffering

A buffering and data-management technology applied in the field of data management in storage systems. It addresses problems such as degraded caching performance when the host manages the drive cache, and the failure of prior systems to recognize the improvements made possible by the solid-state drive architecture. Its effects are to maximize concurrent data transfer operations, maximize parallel operations, and increase performance.

Inactive Publication Date: 2020-05-14
BITMICRO LLC
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Benefits of technology

[0005]The present invention describes cache management methods for a hybrid storage device having volatile and non-volatile caches. Maximizing concurrent data transfer operations to and from the different cache levels, especially to and from the flash-based L2 cache, results in increased performance over conventional methods. Distributed striping is implemented across the rotational drives, maximizing parallel operations on multiple drives. The use of Fastest-To-Fetch and Fastest-To-Flush victim data selection algorithms side-by-side with the LRU algorithm results in further improvements in performance.
[0006]Flow of data to and from the caches and the storage medium is managed using a cache state-based algorithm, allowing the firmware application to choose the state transitions that produce the most efficient data flow.
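The paragraphs above name LRU eviction combined with a "Fastest-To-Flush" victim selection. The patent text here does not give the exact scoring, so the following is only an illustrative sketch: an LRU-ordered L1 cache in which, among the oldest lines, a clean line is preferred as victim (it can be dropped without a write-back), and otherwise the dirty line with the lowest cache line index is flushed, approximating fast sequential HDD writes. The class name, capacity split, and scoring rule are all assumptions for illustration.

```python
from collections import OrderedDict

class L1Cache:
    """Toy L1 cache: LRU order plus a simple 'fastest-to-flush' victim pick.

    The exact victim-selection policy in the patent is not given here;
    this sketch prefers clean lines (no write-back needed), then the
    lowest-indexed dirty line (approximating sequential flush order).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # cache_line_index -> {"dirty": bool}

    def access(self, index, write=False):
        if index in self.lines:
            self.lines.move_to_end(index)       # refresh LRU position
        else:
            if len(self.lines) >= self.capacity:
                self.evict()
            self.lines[index] = {"dirty": False}
        if write:
            self.lines[index]["dirty"] = True

    def evict(self):
        # Consider only the least-recently-used half as victim candidates.
        candidates = list(self.lines)[: max(1, len(self.lines) // 2)]
        clean = [i for i in candidates if not self.lines[i]["dirty"]]
        # Clean lines are "fastest to flush" (just discard); otherwise
        # pick the lowest index so dirty write-backs tend toward
        # sequential HDD addresses.
        victim = clean[0] if clean else min(candidates)
        self.lines.pop(victim)
        return victim
```

A real implementation would track per-line HDD addresses and queue depths; this sketch only shows how a secondary flush-cost heuristic can sit alongside plain LRU ordering.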

Problems solved by technology

In such systems, the management of the drive cache is done by the host, and the resulting overhead contributes to degradation of the caching performance of the storage system.
However, prior solutions that used non-volatile memory as cache did not take advantage of the architecture of the non-volatile memories, which could have further increased the caching performance of the system.
The storage system makes no distinction between a rotational drive and a solid-state drive cache, thus failing to recognize the possible improvements that the architecture of the solid-state drive can bring about.



Examples


Embodiment Construction

[0042]A cache line is a unit of cache memory identified by a unique tag. A cache line consists of a number of host logical blocks identified by host logical block addresses (LBAs). A host LBA is the address of a unit of storage as seen by the host system. The size of a host logical block unit depends on the configuration set by the host. The most common size is 512 bytes, in which case the host sees storage in units of 512 bytes. The Cache Line Index is the sequential index of the cache line to which a specific LBA is mapped.
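The mapping from a host LBA to its Cache Line Index described above can be sketched as simple integer arithmetic. The block size of 512 bytes follows the text; the cache line size of 128 blocks (64 KiB) is an assumption chosen only for illustration.

```python
HOST_BLOCK_SIZE = 512        # bytes per host logical block (common default per the text)
BLOCKS_PER_CACHE_LINE = 128  # assumption: a 64 KiB cache line

def cache_line_index(host_lba: int) -> int:
    """Sequential index of the cache line containing this host LBA."""
    return host_lba // BLOCKS_PER_CACHE_LINE

def offset_in_line(host_lba: int) -> int:
    """Position of the host logical block within its cache line."""
    return host_lba % BLOCKS_PER_CACHE_LINE

# Sanity check on the assumed geometry: 128 blocks * 512 B = 64 KiB per line.
assert BLOCKS_PER_CACHE_LINE * HOST_BLOCK_SIZE == 64 * 1024
```

With this geometry, host LBAs 0 through 127 all map to cache line 0, LBA 128 starts cache line 1, and so on.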

[0043]HDD LBA (Hard-Disk Drive LBA) is the address of a unit of storage as seen by the hard disk. In a system with a single drive, there is a one-to-one correspondence between the host LBA and the HDD LBA. In the case of multiple drives, host LBAs are usually distributed across the hard drives to take advantage of concurrent IO operations.

[0044]HDD Stripe is the unit of storage by which data are segmented across the hard drives. For exa...
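The distribution of host LBAs across drives in stripe-sized units, as paragraphs [0043] and [0044] describe, can be sketched as follows. This is a generic RAID-0-style round-robin layout, not necessarily the patent's distributed-striping scheme; the drive count and stripe size are assumptions for illustration.

```python
NUM_DRIVES = 4      # assumption: four rotational drives
STRIPE_BLOCKS = 256 # assumption: 256 host blocks (128 KiB) per HDD stripe

def host_to_hdd(host_lba: int) -> tuple[int, int]:
    """Map a host LBA to (drive number, HDD LBA) under simple
    round-robin striping across the drives."""
    stripe = host_lba // STRIPE_BLOCKS    # which stripe the block falls in
    offset = host_lba % STRIPE_BLOCKS     # position within that stripe
    drive = stripe % NUM_DRIVES           # stripes rotate across drives
    hdd_lba = (stripe // NUM_DRIVES) * STRIPE_BLOCKS + offset
    return drive, hdd_lba
```

For example, host LBA 0 lands on drive 0 at HDD LBA 0, while host LBA 256 (the start of the next stripe) lands on drive 1 at HDD LBA 0, so sequential host accesses spanning stripe boundaries engage multiple drives concurrently.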



Abstract

A hybrid storage system is described having a mixture of different types of storage devices comprising rotational drives, flash devices, SDRAM, and SRAM. The rotational drives are used as the main storage, providing the lowest cost per unit of storage. Flash memory is used as a higher-level cache for the rotational drives. Methods are provided for managing multiple levels of cache for this storage system: a very fast Level 1 cache consisting of volatile memory (SRAM or SDRAM) and a non-volatile Level 2 cache using an array of flash devices. A method of distributing the data across the rotational drives is described to make caching more efficient. Also described are efficient techniques for flushing data from the L1 and L2 caches to the rotational drives, taking advantage of concurrent flash device operations and concurrent rotational drive operations, and maximizing sequential accesses in the rotational drives rather than random accesses, which are relatively slower. The methods provided here may be extended to systems that have more than two cache levels.

Description

CROSS-REFERENCE(S) TO RELATED APPLICATIONS

[0001]This application is a continuation of application Ser. No. 15/665,321, filed Jul. 31, 2017 and issuing Oct. 15, 2019 as U.S. Pat. No. 10,445,239, which is a continuation of application Ser. No. 14/689,045, filed Apr. 16, 2015 and issued as U.S. Pat. No. 9,734,067 on Aug. 15, 2017, which claims the benefit of and priority to U.S. Provisional App. No. 61/980,561, filed Apr. 16, 2014. This U.S. Provisional Application 61/980,561 is hereby fully incorporated herein by reference. U.S. application Ser. No. 14/689,045 is a continuation-in-part of application Ser. No. 14/217,436, filed Mar. 17, 2014 and issued as U.S. Pat. No. 9,430,386 on Aug. 30, 2016, which claims the benefit of and priority to App. No. 61/801,422, filed Mar. 15, 2013. U.S. application Ser. Nos. 15/665,321, 14/689,045, and 14/217,436 and U.S. Provisional Application 61/801,422 are each hereby fully incorporated by reference herein.

BACKGROUND

Field

[0002]This invention relat...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G06F12/0831; G06F12/0875
CPC: G06F12/0833; G06F2212/452; G06F12/0875; G06F2212/62; G06F12/0897; G06F12/0804; G06F2212/312; G06F2212/225; G06F2212/262; G06F2212/1016; G06F12/0868; G06F11/1076; G06F12/0811
Inventors: BRUCE, ROLANDO H.; DELA CRUZ, ELMER PAULE; ARCEDERA, MARK IAN ALCID
Owner BITMICRO LLC