System and method for managing compression and decompression of system memory in a computer system

A computer system and system memory technology, applied in the field of memory systems. It addresses the problems that, although memory density has increased and the cost per storage bit has decreased, there has been no significant improvement in the effective operation of the memory subsystem or of the software that manages it, and that software compression solutions typically use too many CPU compute cycles and / or add too much bus traffic. The described effects include reduced data bandwidth and storage requirements, safe use of the entire system memory space, and maintenance of a minimum average compression ratio.

Inactive Publication Date: 2012-06-19
MOSSMAN HLDG
Cites: 22 · Cited by: 11

Benefits of technology

[0015]The Compressed Memory Management Unit (CMMU) may operate in conjunction with the one or more compression / decompression engines to allow a processor or I / O master to address more system memory than physically exists. The CMMU may translate system addresses received in system memory accesses into physical addresses. The CMMU may pass the resulting physical address to the system memory controller to access physical memory (system memory). In one embodiment, the CMMU may manage system memory on a page granularity. The CMMU may increase the effective size of system memory by storing the least recently used pages in a compressed format in system memory (and possibly also on the hard drive), and storing the most recently and frequently used pages uncompressed in system memory. The most recently and frequently used data may also be cached in one or more locations, such as in an L1, L2, and / or L3 cache.
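As a concrete illustration of the translation step described above, the following minimal sketch maps a system address to a physical address at page granularity through a page translation table. All type and field names here are illustrative assumptions, not the patent's actual hardware design.

```c
/* Sketch: page-granularity system-to-physical address translation.
   Field names (phys_page, compressed) are hypothetical. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12                     /* assume 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

typedef struct {
    uint32_t phys_page;                   /* physical page frame number */
    int      compressed;                  /* 1 if stored compressed */
} pte_t;

/* Translate a system address using the Page Translation Table (PTT):
   split into page number and offset, then substitute the physical frame. */
uint64_t cmmu_translate(const pte_t *ptt, uint64_t sys_addr)
{
    uint64_t vpn    = sys_addr >> PAGE_SHIFT;     /* system page number */
    uint64_t offset = sys_addr & (PAGE_SIZE - 1); /* byte offset in page */
    return ((uint64_t)ptt[vpn].phys_page << PAGE_SHIFT) | offset;
}
```

In a real CMMU this translation happens in hardware between the processor or I / O master and the memory controller; the sketch only shows the address arithmetic.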
[0019]The CMMU may include, but is not limited to, the following hardware components: a Page Translation Cache (PTC) and one or more scatter / gather DMA channels. In one embodiment, the CMMU may include a compression / decompression engine (CODEC). In one embodiment, the PTC may be fully associative. Software resources that the CMMU manages may include, but are not limited to: a Page Translation Table (PTT) comprising Page Translation Entries (PTEs), Uncompressed Pages (UPs), and Compressed Blocks (CBs). The PTC may include one or more recently or frequently used PTEs from the PTT, and may thus reduce the overhead of accessing a frequently or recently used PTE from the PTT stored in physical memory. In one embodiment, the unused UPs may be linked together to form an Uncompressed Page Free List (UPFL). In one embodiment, the unused CBs may be linked together to form a Compressed Block Free List (CBFL). In one embodiment, the PTEs that reference uncompressed pages may be linked together to form an Uncompressed Least Recently Used (LRU) List (ULRUL). In one embodiment, the PTEs that reference compressed blocks may be linked together to form a Compressed LRU List (CLRUL).
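The linked structures above (UPFL, CBFL, ULRUL, CLRUL) can be pictured with a toy doubly linked list of translation entries; the types and the push-to-front operation below are illustrative assumptions, not the patent's layout.

```c
/* Sketch: PTEs linked into free lists or LRU lists. Pushing an entry
   to the head of an LRU list marks it most recently used. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct ptt_entry {
    uint32_t          page;        /* uncompressed page or first compressed block */
    int               compressed;  /* selects CLRUL vs. ULRUL membership */
    struct ptt_entry *prev, *next; /* links: LRU order, or free-list chain */
} ptt_entry_t;

typedef struct { ptt_entry_t *head, *tail; } lru_list_t;

/* Insert an entry at the head (most-recently-used end) of a list. */
void list_push_front(lru_list_t *l, ptt_entry_t *e)
{
    e->prev = NULL;
    e->next = l->head;
    if (l->head) l->head->prev = e;
    l->head = e;
    if (!l->tail) l->tail = e;      /* first entry is also the tail */
}
```

The tail of such a list is the least recently used entry, i.e. the natural victim when the CMMU needs to compress a page or reclaim a block.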
[0020]When a processor or I / O master generates an access to system memory, the CMMU may translate the system memory address of the access into a physical memory address. In translating the system memory address, the CMMU may perform a PTC lookup. If the PTE is already in the PTC, and if the PTE points to an uncompressed page, then the CMMU may pass the pointer to the uncompressed page from the PTE to the memory controller. The memory controller may use this pointer to directly access physical memory for the access. If the PTE is not already in the PTC, then the CMMU may read the PTE from the PTT located in physical memory. The CMMU may then write or cache the PTE to the PTC as a recently or frequently used PTE. Once the PTE is obtained, either from the PTC or read from the PTT, the PTE may be used to access the uncompressed page. In the case of a read, the uncompressed page may be readily returned to the requesting processor or I / O master.
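The lookup flow above can be sketched as a small fully associative cache backed by the in-memory PTT: hit in the PTC, or on a miss fetch the PTE from the PTT and cache it. The entry count, round-robin replacement, and names below are assumptions for illustration only.

```c
/* Sketch: fully associative Page Translation Cache (PTC) lookup with
   fill-on-miss from the Page Translation Table (PTT). */
#include <assert.h>
#include <stdint.h>

#define PTC_ENTRIES 4

typedef struct { uint32_t phys_page; } ptc_pte_t;

typedef struct {
    int       valid[PTC_ENTRIES];
    uint32_t  tag[PTC_ENTRIES];    /* system page number */
    ptc_pte_t pte[PTC_ENTRIES];
    int       next_victim;         /* simple round-robin replacement */
    int       hits, misses;
} ptc_t;

ptc_pte_t ptc_lookup(ptc_t *c, const ptc_pte_t *ptt, uint32_t vpn)
{
    /* Fully associative: compare the tag against every entry. */
    for (int i = 0; i < PTC_ENTRIES; i++)
        if (c->valid[i] && c->tag[i] == vpn) { c->hits++; return c->pte[i]; }

    /* Miss: read the PTE from the PTT in physical memory, cache it. */
    c->misses++;
    int v = c->next_victim;
    c->next_victim = (v + 1) % PTC_ENTRIES;
    c->valid[v] = 1;
    c->tag[v]   = vpn;
    c->pte[v]   = ptt[vpn];
    return c->pte[v];
}
```

On a hit to an uncompressed page, the resulting pointer would go straight to the memory controller, as the paragraph above describes.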
[0025]As noted above, in an embodiment where the operating system is aware of the increased size of system memory, the kernel driver may be used to ensure that the operating system is able to safely use the entire system memory space without overflowing physical memory. In one embodiment, the kernel driver may accomplish this by ensuring that a minimum average compression ratio across the entire system memory space is maintained. In one embodiment, the CMMU may provide an Application Programming Interface (API) that enables a kernel driver to initiate various CMMU operations.
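The invariant the kernel driver enforces can be stated arithmetically: if the operating system is allowed to use S bytes of system memory backed by only P bytes of physical memory, the average compression ratio must stay at or above S / P, or physical memory could overflow. The following check is a back-of-envelope sketch with hypothetical names, not the driver's API.

```c
/* Sketch: the safety condition a kernel driver could enforce before
   letting the OS commit more system memory. Values are illustrative. */
#include <assert.h>

/* Returns 1 if the current average compression ratio meets the minimum. */
int ratio_is_safe(double system_bytes_in_use, double physical_bytes_used,
                  double min_ratio)
{
    if (physical_bytes_used <= 0.0) return 1;   /* nothing stored yet */
    return (system_bytes_in_use / physical_bytes_used) >= min_ratio;
}
```

When the check fails, the driver would have to react, for example by triggering compression of more pages or writing compressed data out to disk.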
[0026]In one embodiment, one or more Compression / Decompression engines (CODECs) may be optimized to perform page-based compressions and decompressions. If a system memory page is uncompressible, then the CMMU keeps the page uncompressed. In one embodiment, a plurality of DMA-based CODECs may be included. In one embodiment, the one or more CODECs may include at least one parallel data compression and decompression engine, designed for the reduction of data bandwidth and storage requirements and for compressing / decompressing data at a high rate.
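The "keep incompressible pages uncompressed" rule can be illustrated with a toy run-length encoder: compress a page, and if the output is not smaller than the input, report the page as incompressible so the caller stores it as-is. RLE here is only a stand-in for the patent's parallel CODEC, whose algorithm is not specified in this summary.

```c
/* Sketch: page compression with an incompressible-page fallback.
   RLE output format: (run length, byte value) pairs. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* RLE-compress src[0..n) into dst; returns the compressed size, or n if
   the page did not shrink (caller then keeps it uncompressed). */
size_t page_compress(const uint8_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0, i = 0;
    while (i < n) {
        uint8_t b = src[i];
        size_t run = 1;
        while (i + run < n && src[i + run] == b && run < 255) run++;
        if (out + 2 > n) return n;      /* would not shrink: give up */
        dst[out++] = (uint8_t)run;
        dst[out++] = b;
        i += run;
    }
    return out < n ? out : n;
}
```

A page of mostly repeated bytes shrinks; a page of distinct bytes trips the early-exit guard, mirroring the CMMU's decision to leave such a page uncompressed.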

Problems solved by technology

While memory density has increased and the cost per storage bit has decreased over time, there has been no comparable improvement in the effective operation of the memory subsystem or of the software that manages it. In particular, there has been no general-purpose use of compression and decompression for in-memory system data. Software compression has been used, but it typically consumes too many CPU compute cycles and / or adds too much bus traffic, a problem that grows as applications increase in size and complexity; it limits CPU performance, restricts use to certain data types, and does not address heavily loaded or multithreaded applications that require high CPU throughput. Hardware compression devices exist, but they are limited to serial operation at compression rates that suit slow I / O devices such as tape backup units; they do not operate fast enough to run at memory speed and thus lack the performance needed for in-memory data. For I / O compression devices other than tape backup units, the blocks of data to be compressed are often too small to realize the benefits of compression, and multiple separate, serial compression and decompression engines running in parallel are cost prohibitive for general-purpose servers, workstations, desktops, or mobile units. As a result, the amount of system memory available for executing processes within prior art computer systems is generally limited by the amount of physical memory installed in the system.


Embodiment Construction

Incorporation by Reference

[0056]The following patents and patent applications are hereby incorporated by reference in their entirety as though fully and completely set forth herein.

[0057]U.S. Pat. No. 6,173,381 titled “Memory Controller Including Embedded Data Compression and Decompression Engines” issued on Jan. 9, 2001, whose inventor is Thomas A. Dye.

[0058]U.S. Pat. No. 6,170,047 titled “System and Method for Managing System Memory and / or Non-volatile Memory Using a Memory Controller with Integrated Compression and Decompression Capabilities” issued on Jan. 2, 2001, whose inventor is Thomas A. Dye.

[0059]U.S. patent application Ser. No. 09/239,659 titled “Bandwidth Reducing Memory Controller Including Scalable Embedded Parallel Data Compression and Decompression Engines”, whose inventors are Thomas A. Dye, Manuel J. Alvarez II and Peter Geiger, filed on Jan. 29, 1999. Pursuant to a Response to Office Action of Aug. 5, 2002, this application is currently pending a title change fr...


Abstract

A method and system for allowing a processor or I / O master to address more system memory than physically exists are described. A Compressed Memory Management Unit (CMMU) may keep least recently used pages compressed, and most recently and / or frequently used pages uncompressed in physical memory. The CMMU translates system addresses into physical addresses, and may manage the compression and / or decompression of data at the physical addresses as required. The CMMU may provide data to be compressed or decompressed to a compression / decompression engine. In some embodiments, the data to be compressed or decompressed may be provided to a plurality of compression / decompression engines that may be configured to operate in parallel. The CMMU may pass the resulting physical address to the system memory controller to access the physical memory. A CMMU may be integrated in a processor, a system memory controller or elsewhere within the system.

Description

PRIORITY CLAIM[0001]This application claims benefit of priority of provisional application Ser. No. 60 / 250,177 titled “System and Method for Managing Compression and Decompression of System Memory in a Computer System” filed Nov. 29, 2000, whose inventors are Thomas A. Dye, Manny Alvarez and Peter Geiger.FIELD OF THE INVENTION[0002]The present invention relates to memory systems, and more particularly to an integrated compressed memory management unit comprising a compression / decompression circuit where the unit operates to improve performance of a computing system by the storage of compressed system memory data in system memory or physical memory.DESCRIPTION OF THE RELATED ART[0003]Computer system and memory subsystem architectures have remained relatively unchanged for many years. While memory density has increased and the cost per storage bit has decreased over time, there has not been a significant improvement to the effective operation of the memory subsystem or the software wh...


Application Information

Patent Type & Authority: Patent (United States)
IPC(8): G06F12/00; G06F12/02; G06F12/08; G06F12/10
CPC: G06F12/023; G06F12/08; G06F12/10; G06F2212/401
Inventors: GEIGER, PETER; ALVAREZ II, MANUEL J.; DYE, THOMAS A.
Owner MOSSMAN HLDG