Decoupling storage controller cache read replacement from write retirement

A memory controller and high-speed cache technique, applicable to memory systems, electrical digital data processing, instruments, and the like, which addresses problems such as wasted processor cycles.

Status: Inactive · Publication Date: 2007-05-23
Assignee: IBM CORP

AI Technical Summary

Problems solved by technology

It should be appreciated that jumping over so many cache tracks takes a significant amount of time and wastes processor cycles.




Embodiment Construction

[0017] Figure 3 is a block diagram of a data processing environment 300 in which the present invention may be implemented. Storage controller 310 receives input/output (I/O) requests from one or more hosts 302A, 302B, 302C, to which storage controller 310 is attached via network 304. The I/O requests are directed to tracks in storage system 306, where storage system 306 has disk drives in any of several configurations, for example direct access storage devices (DASD), a redundant array of independent disks (RAID array), just a bunch of disks (JBOD), and so on. Storage controller 310 includes processor 312, cache manager 314, and cache 320. Cache manager 314 may comprise hardware components or software/firmware components executed by processor 312 to manage cache 320. Cache 320 includes a first portion and a second portion. In one embodiment, the first cache portion is a volatile storage device 322 and the second cache portion is a non-volatile storage device (NVS) 324. The cache manag...
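The two-portion cache described in paragraph [0017] can be sketched as follows. This is a minimal illustration, not the patented implementation: class and method names (`StorageControllerCache`, `stage_read`, `accept_write`, `destage`) are assumptions, and the sketch only shows the usual placement rule for such designs, under which write data is held in both the volatile cache and the NVS until destaged, while read data lives only in the volatile portion.

```python
# Illustrative sketch (names are hypothetical): a cache with a volatile
# first portion and a non-volatile (NVS) second portion, as in Figure 3.
class StorageControllerCache:
    def __init__(self):
        self.volatile = {}  # first cache portion (e.g. RAM/DRAM), item 322
        self.nvs = {}       # second portion: non-volatile storage, item 324

    def stage_read(self, track, data):
        # Read data staged from disk lives only in the volatile portion.
        self.volatile[track] = data

    def accept_write(self, track, data):
        # Write data is held in both portions until destaged to disk,
        # so a power loss cannot destroy the only copy.
        self.volatile[track] = data
        self.nvs[track] = data

    def destage(self, track):
        # Retiring a write to disk frees the NVS copy; the volatile copy
        # may remain to serve subsequent read hits.
        return self.nvs.pop(track, None)
```

A write therefore occupies space in both portions until it is retired, which is exactly why the A-list scan described later must treat modified entries differently from read data.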


Abstract

In a data storage controller, accessed tracks are temporarily stored in a cache, with write data being stored in both a first cache and a second cache, and read data being stored in the first cache only. Corresponding least recently used (LRU) lists are maintained holding entries that identify the tracks stored in the caches. When the list holding entries for the first cache (the A list) is full, the list is scanned to identify unmodified (read) data that can be discarded from the cache to make room for new data. Prior to or during the scan, modified (write) data entries are moved to the most recently used (MRU) end of the list, allowing the scans to proceed efficiently and reducing the number of times the scan has to skip over modified entries. Optionally, a status bit may be associated with each modified data entry. When a modified entry is moved to the MRU end of the A list without having been requested to be read, its status bit is changed from an initial state to a second state, indicating that it is a candidate to be discarded. If the status bit is already set to the second state, it is left unchanged. If a modified track is moved to the MRU end of the A list as a result of being requested to be read, the status bit of the corresponding A list entry is changed back to the first state, preventing the track from being discarded. Thus, write tracks are allowed to remain in the first cache only as long as necessary.
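The A-list mechanics in the abstract can be sketched in a few lines. This is a hedged illustration of the described scan, not the patented code: the class name `AList`, the `access`/`make_room` methods, and the list-as-`OrderedDict` representation are all assumptions made for clarity.

```python
from collections import OrderedDict

# Hypothetical sketch of the A-list discard scan from the abstract.
MODIFIED = "modified"      # write data, also held in the NVS
UNMODIFIED = "unmodified"  # read data, safe to discard

class AList:
    """LRU list for the first cache portion.

    OrderedDict order runs from the LRU end (first) to the MRU end (last).
    Each value is [state, candidate_bit]; candidate_bit is the optional
    status bit marking a modified entry as a discard candidate.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # track -> [state, candidate_bit]

    def access(self, track, write=False):
        if track in self.entries:
            if write:
                self.entries[track][0] = MODIFIED
            else:
                # A read re-reference clears the candidate bit,
                # protecting the track from being discarded.
                self.entries[track][1] = False
            self.entries.move_to_end(track)  # promote to the MRU end
            return
        if len(self.entries) >= self.capacity:
            self.make_room()
        self.entries[track] = [MODIFIED if write else UNMODIFIED, False]

    def make_room(self):
        """Scan from the LRU end and discard the first unmodified entry.

        Modified entries are moved to the MRU end rather than skipped
        repeatedly; their candidate bit is set so a later pass may retire
        them. If every entry is modified, nothing is discarded here.
        """
        for track in list(self.entries):
            state, _candidate = self.entries[track]
            if state == UNMODIFIED:
                del self.entries[track]  # discard the read data
                return track
            # Modified: promote to MRU end and mark as a discard candidate.
            self.entries[track][1] = True
            self.entries.move_to_end(track)
        return None
```

In this sketch a single promotion of each modified entry replaces the repeated skip-overs the abstract identifies as wasteful, and the status bit distinguishes entries promoted by the scan from entries promoted by genuine read hits.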

Description

Technical Field [0001] The present invention relates generally to data storage controllers, and more particularly to establishing cache discard and demotion policies. Background Art [0002] A data storage controller, such as the International Business Machines Corporation Enterprise Storage Server(R), receives input/output (I/O) requests directed to the attached storage system. An attached storage system may include one or more enclosures containing a large number of interconnected disk drives, such as direct access storage devices (DASD), a redundant array of independent disks (RAID array), just a bunch of disks (JBOD), and so on. If I/O read and write requests are received at a rate faster than they can be processed, the storage controller queues the I/O requests in the primary cache, which may consist of one or more gigabytes of volatile storage, such as random access memory (RAM), dynamic random access memory (DRAM), and the like. A copy of some modif...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F12/12
CPC: G06F12/123; G06F12/127; G06F12/0866
Inventors: Steven R. Lowe, Dharmendra S. Modha, Binny S. Gill, Joseph S. Hyde II
Owner: IBM CORP