
Hardware support for superpage coalescing

A hardware technique for superpage coalescing, applied in the field of computer systems, that addresses the problem of inefficient page coalescing, so as to facilitate efficient coalescing and reduce the latencies associated with page copying.

Status: Inactive
Publication Date: 2007-03-22

AI Technical Summary

Benefits of technology

"The present invention provides an improved method for coalescing superpages in a data processing system. The method reduces latencies associated with page copying and facilitates efficient coalescing. It also includes a hardware-supported method for migrating virtual-to-physical memory mappings in a memory hierarchy including cache levels. The invention allows for efficient use of virtual memory while reducing the impact on cache entries. Overall, the invention improves performance and efficiency in data processing."

Problems solved by technology

However, if the data in the chosen block is not modified, the block is simply abandoned and not written to the next lowest level in the hierarchy.
However, the use of superpages gives rise to a tradeoff.
While superpages can improve the TLB hit rate by reducing the number of entries that must be maintained concurrently in the TLB, they can also lead to underutilization of physical memory if the application does not use the entire superpage; a rough TLB-reach calculation illustrating the potential benefit is sketched below.
This approach, however, still has drawbacks.
Until the coalescing takes effect, the application continues to suffer from poor TLB behavior.
In addition, the size of the memory controller page tables limits the availability of remapped superpages, and the mapping table can quickly grow.
These limitations are exacerbated in NUMA systems having multiple memory controllers.
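
The following is a purely illustrative C calculation of TLB reach that makes the scale of this tradeoff concrete; the 64-entry TLB and the 4 KiB and 16 MiB page sizes are assumptions for the example, not figures taken from the patent.

```c
/* Illustrative only: rough TLB-reach arithmetic for the superpage tradeoff
 * described above.  All sizes below are assumed, not from the patent. */
#include <stdio.h>

int main(void)
{
    const unsigned long entries   = 64;                  /* assumed TLB entry count */
    const unsigned long base_page = 4UL * 1024;          /* 4 KiB base pages        */
    const unsigned long superpage = 16UL * 1024 * 1024;  /* 16 MiB superpages       */

    /* Reach = number of entries times the page size each entry maps. */
    printf("reach with 4 KiB pages : %lu KiB\n", entries * base_page / 1024);
    printf("reach with 16 MiB pages: %lu MiB\n", entries * superpage / (1024 * 1024));
    return 0;
}
```

With the same 64 entries, the superpage mapping covers 1 GiB instead of 256 KiB, which is why far fewer concurrently resident TLB entries are needed; the cost is that a sparsely used 16 MiB superpage wastes the untouched portion of its physical frame.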

Method used




Embodiment Construction


[0029] With reference now to the figures, and in particular with reference to FIG. 3, there is depicted one embodiment 40 of a memory subsystem constructed in accordance with the present invention. Memory subsystem 40 is generally comprised of a memory controller 42 and a system or main memory array 44, and is adapted to facilitate superpage coalescing for an operating system which controls virtual-to-physical page mappings in a data processing system. The operating system (OS) may have many conventional features including appropriate software which determines page mappings, and decides when it is desirable to coalesce pages into a larger (super)page; such details are beyond the scope of the present invention but will become apparent to those skilled in the art.
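
As a concrete reading of this arrangement, the sketch below models, in C, the mapping table of old and new page addresses that the Abstract attributes to the memory controller. It is a minimal illustration under assumed names and sizes (BASE_PAGE_SIZE, PAGES_PER_SUPER, and program_moves are all hypothetical), not the patent's claimed implementation.

```c
/* A minimal sketch, using hypothetical names and sizes, of the mapping table
 * of old and new page addresses that the Abstract says the memory controller
 * keeps while it copies pages in the background. */
#include <stdint.h>
#include <stdio.h>

#define BASE_PAGE_SIZE  4096u   /* assumed base page size              */
#define PAGES_PER_SUPER 4u      /* tiny coalescing factor for the demo */

struct remap_entry {
    uint64_t old_phys;  /* scattered source frame            */
    uint64_t new_phys;  /* slot inside the contiguous region */
    int      copied;    /* set by the controller when done   */
};

struct remap_table {
    struct remap_entry entry[PAGES_PER_SUPER];
    unsigned pending;   /* entries still being copied        */
};

/* OS side: after allocating a contiguous region, hand the controller the
 * old->new pairs so it can copy while the new mapping is already in use. */
static void program_moves(struct remap_table *t,
                          const uint64_t old_frames[PAGES_PER_SUPER],
                          uint64_t new_base)
{
    for (unsigned i = 0; i < PAGES_PER_SUPER; i++) {
        t->entry[i].old_phys = old_frames[i];
        t->entry[i].new_phys = new_base + (uint64_t)i * BASE_PAGE_SIZE;
        t->entry[i].copied   = 0;
    }
    t->pending = PAGES_PER_SUPER;
}

int main(void)
{
    const uint64_t old_frames[PAGES_PER_SUPER] = {
        0x13000, 0x7A000, 0x22000, 0x5F000   /* scattered base pages */
    };
    struct remap_table table;

    program_moves(&table, old_frames, 0x100000);  /* new contiguous region */
    printf("moves pending: %u\n", table.pending);
    printf("entry 2: 0x%llx -> 0x%llx\n",
           (unsigned long long)table.entry[2].old_phys,
           (unsigned long long)table.entry[2].new_phys);
    return 0;
}
```

In this reading, the operating system allocates the contiguous region, fills in the old-to-new pairs, and can switch the virtual-to-physical mapping over immediately, leaving memory controller 42 to drain the pending copies in the background.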

[0030] Memory subsystem 40 provides a hardware solution to superpage coalescing which reduces or eliminates the poor TLB behavior that occurs during the prior art software-directed copying solution. In the present invention...



Abstract

A method of assigning virtual memory to physical memory in a data processing system allocates a set of contiguous physical memory pages for a new page mapping, instructs the memory controller to move the virtual memory pages according to the new page mapping, and then allows access to the virtual memory pages using the new page mapping while the memory controller is still copying the virtual memory pages to the set of physical memory pages. The memory controller can use a mapping table which temporarily stores entries of the old and new page addresses, and releases the entries as copying for each entry is completed. The translation lookaside buffer (TLB) entries in the processor cores are updated for the new page addresses prior to completion of copying of the memory pages by the memory controller. The invention can be extended to non-uniform memory access (NUMA) systems. For systems with cache memory, any cache entry which is affected by the page move can be updated by modifying its address tag according to the new page mapping. This tag modification may be limited to cache entries in a dirty coherency state. The cache can further relocate a cache entry based on a changed congruence class for any modified address tag.
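
To illustrate the window in which pages are accessible before their copies finish, the following self-contained C sketch resolves a physical address issued under the new mapping against a small table of in-flight moves. The names are hypothetical, and the specific policy of serving a not-yet-copied page from its old frame is an assumption made for the illustration; the abstract itself only states that access through the new mapping is permitted while copying proceeds and that table entries are released as each copy completes.

```c
/* Illustrative sketch of the access-redirection window described in the
 * abstract: accesses already use the new mapping while the memory controller
 * is still copying, and a small table of old/new addresses bridges the gap.
 * Names, sizes, and the redirect policy are assumptions, not claim language. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

struct move_entry {
    uint64_t old_base;  /* old physical page address               */
    uint64_t new_base;  /* new physical page address               */
    bool     done;      /* copy completed -> entry can be released */
};

/* Resolve a physical address issued under the *new* mapping.  If the page is
 * still in flight and not yet copied, redirect the access to the old frame. */
static uint64_t resolve(uint64_t addr, const struct move_entry *tab, unsigned n)
{
    uint64_t page = addr & ~(uint64_t)(PAGE_SIZE - 1);
    uint64_t off  = addr &  (uint64_t)(PAGE_SIZE - 1);

    for (unsigned i = 0; i < n; i++) {
        if (!tab[i].done && tab[i].new_base == page)
            return tab[i].old_base + off;   /* copy pending: use old frame */
    }
    return addr;                            /* copied, or not in flight */
}

int main(void)
{
    struct move_entry tab[] = {
        { 0x10000, 0x80000, false },  /* still copying  */
        { 0x23000, 0x81000, true  },  /* already copied */
    };

    printf("0x80010 -> 0x%llx\n", (unsigned long long)resolve(0x80010, tab, 2));
    printf("0x81010 -> 0x%llx\n", (unsigned long long)resolve(0x81010, tab, 2));
    return 0;
}
```

Running the example redirects the access to the still-copying page (0x80010 resolves back to 0x10010), while the already-copied page is served at its new address unchanged.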

Description

BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

[0002] The present invention generally relates to computer systems, specifically to memory subsystems for computers, and more particularly to a method of providing efficient mappings between virtual memory and physical memory.

[0003] 2. Description of the Related Art

[0004] The basic structure of a conventional computer system 10 is shown in FIG. 1. Computer system 10 may have one or more processing units, two of which 12a and 12b are depicted, which are connected to various peripheral devices, including input/output (I/O) devices 14 (such as a display monitor, keyboard, and permanent storage device), memory device 16 (such as random access memory or RAM) that is used by the processing units to carry out program instructions, and firmware 18 whose primary purpose is to seek out and load an operating system from one of the peripherals (usually the permanent memory device) whenever the computer is first turned on. Processing...

Claims


Application Information

Patent Type & Authority Applications(United States)
IPC IPC(8): G06F12/00G06F12/08G06F12/10
CPCG06F12/1045
Inventor ELNOZAHY, ELMOOTAZBELLAH N.PETERSON, JAMES LYLERAJAMONY, RAMAKRISHNANSHAFI, HAZIM
Owner ELNOZAHY ELMOOTAZBELLAH N