
Automatic READ latency calculation without software intervention for a source-synchronous interface

Status: Inactive · Publication Date: 2005-05-24
NORTH STAR INNOVATIONS
Cites: 10 · Cited by: 39
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, interfaces that use synchronous protocols are limited by the physical delay between communicating devices.
System design requires a uniform clock among the various devices, mandating that clock wires be routed across the interface and increasing design complexity.
Source-synchronous data transfers between devices in different timing domains can be complicated by latency, complexity, and a lack of repeatability.
This variance hampers debugging of a processor where cycle reproducibility is required.
The difficulty of debugging is further compounded when two processors with minor manufacturing differences cannot be compared on a cycle-by-cycle basis.

Method used


Examples


First method embodiment

[0044]FIG. 3 is a flowchart depicting a method for receiving READ data reproducibly on an interface with a variable recurring read latency, in accordance with a first method embodiment of the present invention. The method may be applicable in fully pipelined memory interfaces, allowing multiple independent READ commands to be pending, and multiple data values to be stored in a data FIFO.

[0045]At step 302, a first shift register is reset to an initialized state, and the first shift register is programmed to shift in response to each clock cycle of a timer. Step 302 may also be performed whenever a clock frequency of the first timing domain is changed. At step 304, a clock cycle is detected. At step 306, a determination is made as to whether a READ command is needed. If a READ command is not needed, then at step 308, a “zero” is provided as an input to the shift register. If a READ command is needed, then at step 310, a READ command is generated (in a first timing domain), and at step 3...
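The steps above can be sketched as a small simulation. This is a toy model, not the patent's implementation: the class name, register depth, and the convention that a “one” marks a cycle on which a READ was issued are all illustrative assumptions drawn from steps 302–310.

```python
from collections import deque

class LatencyShiftRegister:
    """Toy model of the first embodiment: a shift register that shifts
    once per timer clock, taking a 1 when a READ issues and a 0 otherwise."""

    def __init__(self, depth=16):
        # Step 302: reset the shift register to an initialized (all-zero) state.
        self.bits = deque([0] * depth, maxlen=depth)

    def clock(self, read_issued):
        # Steps 304-310: each detected clock cycle shifts in a "one" if a
        # READ command was generated this cycle, otherwise a "zero".
        self.bits.appendleft(1 if read_issued else 0)

    def latency_on_data_valid(self):
        # When the off-chip device asserts its data-valid strobe, the
        # position of the oldest pending "one" gives the READ latency.
        for pos in range(len(self.bits) - 1, -1, -1):
            if self.bits[pos] == 1:
                return pos + 1  # clock count including the issue cycle
        return None  # no READ pending: spurious strobe

# A READ issued on one cycle, followed by four idle cycles:
sr = LatencyShiftRegister()
sr.clock(read_issued=True)
for _ in range(4):
    sr.clock(read_issued=False)
# latency observed when the strobe arrives = 5 cycles
```

Because the register shifts on every cycle regardless of traffic, the distance of the marker bit from the input end is exactly the elapsed cycle count, which is what makes the measurement repeatable.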

Second method embodiment

[0055]FIG. 4 is a flowchart depicting a method for receiving READ data reproducibly on an interface with a variable recurring read latency, in accordance with a second method embodiment of the present invention. The method may be applicable in fully pipelined memory interfaces, allowing multiple independent READ commands to be pending, and multiple data values to be stored in a data FIFO. The method of FIG. 4 includes a synthesized READ, also known as a “Dummy” READ. The synthesized READ is not intended to provide useful data, but merely to cause an off-chip memory device to provide a data valid signal.

[0056]At a step 402, a first shift register is reset to an initialized state, and the first shift register is programmed to shift in response to each clock cycle of a timer. Step 402 may also be performed whenever a clock frequency of the first timing domain is changed. At step 404, a synthesized READ command is generated (in a first timing domain), and at step 406, a “one” is provided ...
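The dummy-READ calibration of this embodiment can be sketched as follows. This is a minimal sketch under stated assumptions: the class and method names are ours, and the explicit cycle counter stands in for the shift register of the flowchart.

```python
class DummyReadLatencyLock:
    """Sketch of the second embodiment: issue a synthesized ("dummy") READ
    whose only purpose is to make the off-chip memory device return a
    data-valid strobe, then lock the elapsed cycle count as the latency."""

    def __init__(self):
        self.cycle_count = 0
        self.locked_latency = None
        self.dummy_read_pending = False

    def issue_dummy_read(self):
        # Step 404: generate the synthesized READ in the first timing domain.
        self.dummy_read_pending = True
        self.cycle_count = 0

    def clock(self):
        # Count timer clock cycles while the dummy READ is outstanding.
        if self.dummy_read_pending and self.locked_latency is None:
            self.cycle_count += 1

    def data_valid(self):
        # Data-valid strobe from the off-chip device: lock the elapsed
        # count as the READ latency for all subsequent real READs.
        if self.dummy_read_pending and self.locked_latency is None:
            self.locked_latency = self.cycle_count
            self.dummy_read_pending = False

lock = DummyReadLatencyLock()
lock.issue_dummy_read()
for _ in range(7):
    lock.clock()
lock.data_valid()
# lock.locked_latency is now 7
```

The dummy READ costs one transaction at startup but means no real READ ever samples the interface before the latency is known.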

Third method embodiment

[0066]FIG. 5 is a flowchart depicting a method for receiving READ data reproducibly on an interface with a variable recurring read latency, in accordance with a third method embodiment of the present invention. The method may be applicable in fully pipelined memory interfaces, allowing multiple independent READ commands to be pending, and multiple data values to be stored in a data FIFO. Like the method of FIG. 4, the method of FIG. 5 includes a synthesized READ, also known as a “Dummy” READ. However, the method of FIG. 5 also includes a synthesized WRITE. The synthesized READ is intended to return the data written by the synthesized WRITE. Moreover, the step of comparing the write pointer value with the read pointer value of the method of FIG. 4 is replaced with a step of comparing the data itself in the data FIFO (returned from the off-chip memory device) with the synthesized data of the synthesized WRITE.
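The write-then-read-back calibration described here can be sketched as below. All names, the fixed test pattern, and the one-slot-per-cycle FIFO model are illustrative assumptions; the point is only the data comparison that replaces the pointer comparison of the second embodiment.

```python
class WriteReadbackCalibrator:
    """Sketch of the third embodiment: write a known pattern with a
    synthesized WRITE, read it back with a synthesized READ, and lock
    the latency when the data in the FIFO matches the pattern."""

    PATTERN = 0xA5A5  # arbitrary known pattern for the synthesized WRITE

    def __init__(self, memory_latency):
        self.memory_latency = memory_latency  # modeled device latency (cycles)
        self.locked_latency = None

    def calibrate(self):
        # Synthesized WRITE of the pattern, then a synthesized READ; the
        # data FIFO fills after the device's real round-trip latency.
        fifo = [None] * self.memory_latency + [self.PATTERN]
        for cycles, data in enumerate(fifo):
            if data == self.PATTERN:
                # Comparing the returned data itself (not pointers)
                # against the synthesized data locks the latency.
                self.locked_latency = cycles
                return self.locked_latency
        return None

cal = WriteReadbackCalibrator(memory_latency=6)
# cal.calibrate() locks a latency of 6 cycles
```

Comparing data rather than pointers also confirms the full write/read path end to end, at the cost of consuming one memory location for the calibration pattern.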

[0067]At a step 502, a first shift register is reset to...



Abstract

In response to a clock cycle and a pending READ command for data with a variably recurring access latency, a clock cycle count is adjusted. If a latency value has not been locked and if the READ command is a first READ command, the clock cycle count is stored as a locked latency value upon receiving a synchronized data available event (DQS for instance). Each subsequent READ command has an associated clock cycle count to enable pipelining wherein the clock cycle count for each READ starts incrementing when the individual READ command is issued. For subsequent READ commands, if the cycle count compares favorably with the locked latency value, data can be sampled safely from the interface at the identical latency for every READ request issued. The locked latency value can be read and/or written by software/hardware such that the read latency is consistent across multiple devices for reproducibility during debug.
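The pipelining rule in the abstract, that each READ's counter starts at issue and its data is sampled when the counter reaches the locked value, reduces to simple arithmetic. The function below is a minimal restatement; its name and signature are ours, not the patent's.

```python
def sample_cycles(read_issue_cycles, locked_latency):
    """Given the cycles on which pipelined READ commands issue and the
    locked latency value, return the cycle on which each READ's data
    can be sampled safely from the interface."""
    return [issue + locked_latency for issue in read_issue_cycles]

# Three pipelined READs issued on cycles 0, 2, and 3 with a locked
# latency of 5 sample their data on cycles 5, 7, and 8.
```

Because every READ uses the same locked value, the interface behaves identically across runs and across devices, which is what makes cycle-accurate debug comparison possible.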

Description

FIELD OF THE INVENTION

[0001]The present invention relates to high-speed memory devices, and more particularly to read latency calculation in a high-speed memory device with variable recurring latency.

BACKGROUND OF THE INVENTION

[0002]The rapid increase in processor speed has necessitated a commensurate increase in memory access speed of off-chip caches or memory to prevent memory accesses from becoming a bottleneck. Traditionally, access to off-chip memory devices has been in accordance with a synchronous protocol. Synchronous protocols, in which off-chip accesses have a guaranteed bounded recurring latency relationship, have been easy to implement and are well defined. Synchronous protocols generally have been implemented by a clock that distributes a clock signal to an on-chip controller and to the off-chip caches or memory. Accesses are initialized and terminated only at transitions in value of the clock signal.

[0003]However, interfaces for which synchronous protocols are used are...

Claims


Application Information

IPC(8): G06F12/00, G06F12/14, G06F13/42
CPC: G06F13/4217
Inventors: WELKER, JAMES A.; AUDITYAN, SRINATH; NUNEZ, JOSE M.; PODNAR, ROBERT C.
Owner: NORTH STAR INNOVATIONS