
System and method for reducing memory access latency using selective replication across multiple memory ports

A memory access and memory port technology, applicable to memory architecture access/allocation, memory systems, and memory address allocation/relocation.

Active Publication Date: 2016-06-29
MARVELL ASIA PTE LTD
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Data can be uniquely stored across various memory ports



Examples


Detailed Description of Embodiments

[0026] The following is a description of example embodiments.

[0027] Before describing the example embodiments of the present invention in detail, an example network security processor in which the embodiments may be implemented is described below, to help the reader understand the inventive features of the present invention.

[0028] Figure 1 is a block diagram of a network services processor 100. The network services processor 100 uses at least one processor core 120 to provide high application performance.

[0029] The network services processor 100 processes the OSI L2-L7 layer protocols encapsulated in the received data packets. As is well known to those skilled in the art, the Open Systems Interconnection (OSI) reference model defines seven network protocol layers (L1-L7). The physical layer (L1) represents the actual interface that connects a device to a transmission medium, including electrical and physical interfaces. The data link layer (L2) per...



Abstract

In one embodiment, a system includes multiple memory ports (608A-608D). The memory ports are distributed into multiple subsets, where each subset is identified by a subset index and each memory port has an individual latency based on its corresponding workload. The system further includes a first address hashing unit (602B) configured to receive a read request including a virtual memory address. The virtual memory address is associated with a replication factor and refers to graph data. The first address hashing unit converts the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address into a hardware-based memory address. The hardware-based address refers to graph data in the memory ports within the subset indicated by the corresponding subset index. The system further includes a memory replication controller (604) configured to direct the read request for the hardware-based address to the memory port, within the subset indicated by the corresponding subset index, that has the lowest individual latency.
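The abstract's pipeline (virtual address + replication factor → subset index and hardware address → lowest-latency port in that subset) can be sketched as follows. This is a hypothetical illustration, not the patent's actual hardware logic: the port count, latency values, and the modulo/divide "hash" are invented stand-ins.

```python
NUM_PORTS = 4                 # e.g. ports 608A-608D in the abstract
PORT_LATENCY = [5, 9, 3, 7]   # per-port latency, driven by current workload

def hash_address(vaddr, replication_factor):
    """Convert a virtual address into (subset_index, hardware_address).

    With replication factor r, each subset contains r ports that all hold a
    copy of the data, so there are NUM_PORTS // r subsets to choose among.
    The modulo/divide below is a simple stand-in for the hashing unit.
    """
    num_subsets = NUM_PORTS // replication_factor
    subset_index = vaddr % num_subsets    # pick which subset holds this data
    hw_address = vaddr // num_subsets     # stand-in address translation
    return subset_index, hw_address

def select_port(subset_index, replication_factor):
    """Memory replication controller: pick the replica port in the chosen
    subset with the lowest current latency."""
    size = replication_factor
    ports = range(subset_index * size, subset_index * size + size)
    return min(ports, key=lambda p: PORT_LATENCY[p])

# With factor 2, subset 1 = ports {2, 3}; port 2 has the lower latency (3 < 7).
subset, hw = hash_address(0x1235, 2)
port = select_port(subset, 2)
```

The key trade-off the abstract describes: a higher replication factor gives the controller more replica ports to choose from (lower expected latency) at the cost of storing more copies.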

Description

[0001] Related Application

[0002] This application is a continuation of U.S. Application Serial No. 13/280,738, filed October 25, 2011. The entire teachings of the aforementioned application are hereby incorporated by reference.

Background

[0003] For many computer systems, memory latency is a significant barrier to accessing memory addresses. Due to memory latency, more than 90% of the time a computer system needs to perform a particular algorithmic function may be spent waiting for responses to read requests. When an algorithm accesses memory, the system schedules a read request to the memory, waits for the memory port to return the requested data, and then the algorithm processes the returned data. Algorithms may frequently request data from a subsequent memory address based on the returned data. Processing the returned data and issuing the subsequent memory read request may take less time than waiting for the m...
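The dependent-read pattern the background describes, where each read's result supplies the address of the next read, is commonly called pointer chasing. A minimal illustrative sketch (the addresses, latency value, and dictionary "memory" are invented for illustration):

```python
import time

MEMORY_LATENCY = 0.001  # seconds; stand-in for one memory-port round trip

# A toy memory where each location stores the address of the next read.
memory = {0: 8, 8: 16, 16: 24, 24: None}

def read(addr):
    time.sleep(MEMORY_LATENCY)   # the algorithm stalls here on every access
    return memory[addr]

def chase(addr):
    """Follow the chain of dependent reads starting at addr."""
    hops = 0
    while addr is not None:
        addr = read(addr)        # next address depends on the returned data
        hops += 1
    return hops
```

Because each address depends on the previous response, the reads cannot be overlapped or pipelined, so the full memory latency is paid on every hop; this is the cost the patent's selective replication aims to reduce by routing each read to the least-loaded replica.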

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06F12/02; G06F12/06
CPC: G06F2212/174; G06F2212/2532; G06F12/0292; G06F12/06; G06F12/1018
Inventor: J. Pangborn, G. A. Bouchard, R. Goyal, R. E. Kessler
Owner MARVELL ASIA PTE LTD