
Hierarchical Memory Addressing

Active Publication Date: 2012-03-29
NVIDIA CORP
Cites: 7 · Cited by: 37
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Benefits of technology

[0010]One advantage of embodiments of the present invention is that programs executing on a GPU cluster may efficiently access data within the unified address space. Each distinct memory circuit within each GPU associated with the GPU cluster is assigned a portion of the unified address space and is accessible from any GPU within the GPU cluster.
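The unified address space described above can be sketched in code: each memory circuit on each GPU is assigned a contiguous window of one flat address space, so any GPU can name any unit of data. The window size and circuit counts below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: carve one unified address space into per-GPU,
# per-memory-circuit windows. WINDOW and circuits_per_gpu are assumed
# values for illustration only.

WINDOW = 1 << 32  # bytes of unified address space per memory circuit (assumed)

def unified_base(gpu_id, circuit_id, circuits_per_gpu=4):
    """Base unified address of a given memory circuit on a given GPU."""
    return (gpu_id * circuits_per_gpu + circuit_id) * WINDOW

def resolve(addr, circuits_per_gpu=4):
    """Map a unified address back to (gpu, circuit, local offset)."""
    window_index, offset = divmod(addr, WINDOW)
    gpu_id, circuit_id = divmod(window_index, circuits_per_gpu)
    return gpu_id, circuit_id, offset
```

Because the mapping is a pure arithmetic partition, any GPU in the cluster can compute where a unified address physically resides without consulting a central directory.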

Problems solved by technology

While technically feasible, this technique makes inefficient use of system resources such as bandwidth and memory.
The overall process makes inefficient use of GPU resources, further reducing overall system efficiency.
Additionally, each operation that transmits a unit of data from one GPU to another typically requires explicit programming instructions written by a developer, making the application development process inefficient with respect to developer time and attention.


Image

[Figures: three drawings titled "Hierarchical Memory Addressing"]


Embodiment Construction

[0019]In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.

System Overview

[0020]FIG. 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 communicating via an interconnection path that may include a memory bridge 105. Memory bridge 105, which may be, e.g., a Northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an I/O (input/output) bridge 107. I/O bridge 107, which may be, e.g., a Southbridge chip, receives user input from one or mor...



Abstract

One embodiment of the present invention sets forth a technique for addressing data in a hierarchical graphics processing unit cluster. A hierarchical address is constructed based on the location of a storage circuit where a target unit of data resides. The hierarchical address comprises a level field indicating a hierarchical level for the unit of data and a node identifier that indicates which GPU within the GPU cluster currently stores the unit of data. The hierarchical address may further comprise one or more identifiers that indicate which storage circuit in a particular hierarchical level currently stores the unit of data. The hierarchical address is constructed and interpreted based on the level field. The technique advantageously enables programs executing within the GPU cluster to efficiently access data residing in other GPUs using the hierarchical address.
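The abstract's hierarchical address can be illustrated with a small encode/decode sketch: a level field selects how the rest of the address is interpreted, and cluster-level addresses additionally carry a node (GPU) identifier and a storage-circuit identifier. All field widths and level values here are assumptions for illustration, not taken from the patent.

```python
# Illustrative hierarchical address layout (assumed widths):
#   bits 62-63: level field
#   bits 56-61: node (GPU) identifier within the cluster
#   bits 48-55: storage-circuit identifier within that GPU
#   bits  0-47: byte offset within the storage circuit

LEVEL_LOCAL   = 0  # data resides in the local GPU's memory
LEVEL_CLUSTER = 1  # data resides in another GPU within the cluster

def make_address(level, node_id=0, circuit_id=0, offset=0):
    """Construct a hierarchical address from its fields."""
    return (level << 62) | (node_id << 56) | (circuit_id << 48) | offset

def decode_address(addr):
    """Interpret the address based on its level field, as the abstract describes."""
    level = addr >> 62
    if level == LEVEL_LOCAL:
        # Local addresses need no node/circuit routing information.
        return {"level": level, "offset": addr & ((1 << 48) - 1)}
    return {
        "level": level,
        "node": (addr >> 56) & 0x3F,
        "circuit": (addr >> 48) & 0xFF,
        "offset": addr & ((1 << 48) - 1),
    }
```

Branching on the level field is the key point: a program running on any GPU can hand the same address to its memory system, which either satisfies it locally or routes it to the identified node and storage circuit.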

Description

CROSS-REFERENCE TO RELATED APPLICATIONS[0001]This application claims priority benefit to U.S. provisional patent application titled, “Hierarchical Memory Addressing,” filed on Sep. 24, 2010 and having Ser. No. 61/386,256 (Attorney Docket Number NVDA/SC-10-0046-US0). This related application is hereby incorporated herein by reference in its entirety.BACKGROUND OF THE INVENTION[0002]1. Field of the Invention[0003]Embodiments of the present invention generally relate to multiple graphics processing unit (GPU) systems and more specifically to hierarchical memory addressing.[0004]2. Description of Related Art[0005]Commercial graphics processing unit (GPU) computation systems commonly configure a cluster of multiple GPU devices to operate in concert, for example, to solve a single problem. In such systems, each GPU device typically executes instructions to solve a portion of the problem and communicates intermediate results with other GPU devices as execution progresses. A local memory ma...

Claims


Application Information

IPC(8): G06F13/00, G06F12/06
CPC: G06F12/0284, G06F12/08, G06F12/0811, G06F2212/253, G06F2212/302, G06F2213/0038, G06F2212/251, G06F2212/2515
Inventor: DALLY, WILLIAM JAMES
Owner: NVIDIA CORP