Providing hardware support for shared virtual memory between local and remote physical memory

A technology relating to remote and local memory, applied in the field of providing hardware support for shared virtual memory between local and remote physical memory, which can solve problems such as the operating system not managing or allocating accelerator-local memory.

Inactive Publication Date: 2011-04-20
INTEL CORP
Cites: 3 | Cited by: 42

AI Technical Summary

Problems solved by technology

While the OS manages the physical memory available on the motherboard (system memory), it does not manage or allocate memory that is local to and available to the accelerator

Embodiment Construction

[0015] Embodiments enable a processor (e.g., a central processing unit (CPU) on a socket) to create and manage a fully shared virtual address space with an accelerator that is interconnected with the system via an interface such as a Peripheral Component Interconnect Express (PCIe™) interface, using special load/store transactions to address memory residing on the accelerator. The ability to directly address remote memory increases the effective computing capacity seen by application software and allows applications to share data seamlessly, without requiring the programmer to explicitly move data back and forth. In this way, memory can be addressed without resorting to memory protection and faulting on virtual address accesses in order to redirect pending memory accesses from fault handlers. Existing shared-memory multi-core processing can thus be extended to include accelerators that are not on-socket but are connected via peripheral non-coherent links.
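To make this concrete, here is a minimal C sketch of the idea, not the claimed hardware design: the names (xlate_entry_t, shared_vm_load) and the two arrays standing in for system DRAM and accelerator DRAM are all invented for illustration. It shows a translation entry carrying a location indicator, with a load routed either to local memory or to remote accelerator memory, and no fault or error-handler redirection on the remote path.

    /* Hypothetical sketch; in real hardware the remote branch would be a
     * special load transaction tunneled over the PCIe(TM) link rather
     * than an array read. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  ((1ull << PAGE_SHIFT) - 1)

    typedef enum { MEM_LOCAL, MEM_REMOTE } mem_loc_t;

    typedef struct {
        uint64_t  vpn;   /* virtual page number                    */
        uint64_t  pfn;   /* physical frame number                  */
        mem_loc_t loc;   /* which physical memory holds the frame  */
    } xlate_entry_t;

    /* Simulated backing stores: motherboard vs. accelerator DRAM. */
    static uint8_t local_dram[1 << 16];
    static uint8_t remote_dram[1 << 16];

    /* Route a byte load according to the entry's location indicator. */
    static uint8_t shared_vm_load(const xlate_entry_t *e, uint64_t vaddr)
    {
        uint64_t paddr = (e->pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
        return (e->loc == MEM_REMOTE) ? remote_dram[paddr]
                                      : local_dram[paddr];
    }

    int main(void)
    {
        /* Two pages of one virtual address space: page 0 lives in
         * system memory, page 1 lives on the accelerator. */
        xlate_entry_t tlb[] = {
            { .vpn = 0, .pfn = 3, .loc = MEM_LOCAL  },
            { .vpn = 1, .pfn = 7, .loc = MEM_REMOTE },
        };
        local_dram [(3u << PAGE_SHIFT) | 0x10] = 0xAA;
        remote_dram[(7u << PAGE_SHIFT) | 0x10] = 0xBB;

        printf("VA 0x0010 -> 0x%02X (local frame)\n",
               shared_vm_load(&tlb[0], 0x0010));
        printf("VA 0x1010 -> 0x%02X (remote frame)\n",
               shared_vm_load(&tlb[1], 0x1010));
        return 0;
    }

Note that the application-visible address space is uniform: the same load helper serves both pages, and only the translation entry knows which side of the link the data lives on.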

[0016] In co...


Abstract

In one embodiment, the present invention includes a memory management unit (MMU) having entries to store virtual address to physical address translations, where each entry includes a location indicator to indicate whether a memory location for the corresponding entry is present in a local or remote memory. In this way, a common virtual memory space can be shared between the two memories, which may be separated by one or more non-coherent links. Other embodiments are described and claimed.
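As a rough illustration of the structure the abstract describes (a sketch under assumed names: the entry layout, mmu_translate, and the linear lookup are simplifications, not Intel's design), the following C fragment shows an MMU entry pairing a virtual-to-physical translation with a location indicator, and a lookup that returns both, so the memory pipeline knows whether an access must cross the non-coherent link.

    /* Hedged sketch of an MMU lookup over entries that each carry a
     * location indicator; all identifiers are hypothetical. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    typedef enum { LOC_LOCAL, LOC_REMOTE } loc_t;

    typedef struct {
        uint64_t vpn;    /* virtual page number   */
        uint64_t pfn;    /* physical frame number */
        loc_t    loc;    /* location indicator    */
        bool     valid;
    } mmu_entry_t;

    typedef struct {
        uint64_t paddr;  /* translated physical address       */
        loc_t    loc;    /* routed to local or remote memory  */
    } translation_t;

    /* Return true on a hit; a miss would fall back to a page-table
     * walk (not shown). */
    bool mmu_translate(const mmu_entry_t *tab, size_t n,
                       uint64_t vaddr, translation_t *out)
    {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        for (size_t i = 0; i < n; i++) {
            if (tab[i].valid && tab[i].vpn == vpn) {
                out->paddr = (tab[i].pfn << PAGE_SHIFT)
                           | (vaddr & ((1ull << PAGE_SHIFT) - 1));
                out->loc   = tab[i].loc;
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        mmu_entry_t table[] = {
            { .vpn = 0x40, .pfn = 0x9, .loc = LOC_REMOTE, .valid = true },
        };
        translation_t tr;
        if (mmu_translate(table, 1, 0x40123, &tr))
            printf("PA 0x%llx (%s)\n", (unsigned long long)tr.paddr,
                   tr.loc == LOC_REMOTE ? "remote" : "local");
        return 0;
    }

The linear scan stands in for whatever associative lookup the hardware actually performs; the point is only that one virtual address space spans both memories and each translation records which memory holds its frame.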

Description

Background technique

[0001] With the development of processor-based systems, the availability of high-speed peripheral interconnects for connecting programmable accelerators to the system, such as a Peripheral Component Interconnect Express (PCIe™) interconnect based on the PCI Express™ Base Specification Version 2.0 (released January 17, 2007) (hereinafter the PCIe™ Specification) or a link of another such protocol, allows system integrators to pack more computing power into a system. However, there are problems in ensuring that applications can transparently take advantage of the additional computing power without requiring significant changes to manually divide the computation between the main processor (such as a multi-core central processing unit (CPU)) and the accelerator, and to manage the movement of data to and from the accelerator. Traditionally, only the main system memory managed by the...


Application Information

IPC(8): G06F12/10
CPC: G06F2212/254; G06F12/121; G06F12/1027; G06F12/1036; G06F15/163
Inventor: G. N. Chinya, H. Wang, D. A. Mathaikutty, J. D. Collins, E. Schuchman, J. P. Held, A. V. Bhatt, P. Sethi, S. F. Whalley
Owner INTEL CORP