281 patented technologies matching "Memory hierarchy"

In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference.

Method and apparatus for prefetching recursive data structures

Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from memory. Prefetching works well for data structures with regular memory access patterns, but less so for data structures such as trees, hash tables, and other structures in which the datum that will be used is not known a priori. The present invention significantly increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. The invention is applicable to those data structure accesses in which the traversal path is dynamically determined. The invention does this by aggregating traversal requests and then pipelining the traversal of the aggregated requests on the data structure. Once enough traversal requests have been accumulated so that most of the memory latency can be hidden by prefetching the accumulated requests, the data structure is traversed by performing software pipelining on some or all of the accumulated requests. As requests are completed and retired from the set of requests being traversed, additional accumulated requests are added to that set. This process is repeated until either an upper threshold of processed requests or a lower threshold of residual accumulated requests has been reached. At that point, the traversal results may be processed.
Owner: DIGITAL CACHE LLC +1
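
The aggregate-then-pipeline mechanism lends itself to a short sketch. Below is a minimal C illustration for binary-tree lookups, assuming the GCC/Clang __builtin_prefetch intrinsic; the Request layout, the fixed in-flight window, and the retire/refill policy are illustrative assumptions, and the patent's upper and lower thresholds are simplified here to a single window size.

```c
/* Minimal sketch: aggregate tree-lookup requests, then rotate over a
 * window of in-flight requests, prefetching each request's next node
 * before returning to it. Names and sizes are illustrative. */
#include <stddef.h>

typedef struct Node { int key; struct Node *left, *right; } Node;

typedef struct { int key; Node *cur; Node *result; } Request;

#define WINDOW 8  /* requests kept in flight; tune to memory latency */

static void swap_req(Request *a, Request *b)
{
    Request t = *a; *a = *b; *b = t;
}

/* Advance each in-flight request one traversal step per pass. */
static void traverse_batch(Request *reqs, size_t n, Node *root)
{
    size_t live = n < WINDOW ? n : WINDOW;  /* active window size   */
    size_t next = live;                     /* next pending request */
    for (size_t i = 0; i < live; i++) {
        reqs[i].cur = root;
        __builtin_prefetch(root);
    }
    while (live > 0) {
        for (size_t i = 0; i < live; ) {
            Request *r = &reqs[i];
            if (r->cur == NULL || r->cur->key == r->key) {
                r->result = r->cur;          /* retire this request */
                if (next < n) {              /* refill the window   */
                    swap_req(&reqs[i], &reqs[next++]);
                    reqs[i].cur = root;
                    __builtin_prefetch(root);
                    i++;
                } else {                     /* shrink the window   */
                    swap_req(&reqs[i], &reqs[live - 1]);
                    live--;
                }
            } else {
                r->cur = (r->key < r->cur->key) ? r->cur->left
                                                : r->cur->right;
                __builtin_prefetch(r->cur);  /* hide the next miss  */
                i++;
            }
        }
    }
}
```

The rotation is the point: by the time control returns to a given request, the node it prefetched on the previous pass should already be in cache, so each pass does useful work instead of stalling on a pointer chase.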

Method for prefetching recursive data structure traversals

Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from memory. Prefetching works well for data structures with regular memory access patterns, but less so for data structures such as trees, hash tables, and other structures in which the datum that will be used is not known a priori. In modern transaction processing systems, database servers, operating systems, and other commercial and engineering applications, information is frequently organized in trees, graphs, and linked lists. Lack of spatial locality results in a high probability that a miss will be incurred at each cache in the memory hierarchy. Each cache miss causes the processor to stall while the referenced value is fetched from lower levels of the memory hierarchy. Because this is likely to be the case for a significant fraction of the nodes traversed in the data structure, processor utilization suffers. The inability to compute the next address to be referenced in advance makes prefetching difficult in such applications. The invention allows compilers and/or programmers to restructure data structures and traversals so that pointers are dereferenced in a pipelined manner, thereby making it possible to schedule prefetch operations in a consistent fashion. The present invention significantly increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. For data structure traversals in which the traversal path may be predetermined, a transformation is performed on the data structure that permits references to nodes that will be traversed in the future to be computed sufficiently far in advance to prefetch the data into cache.
Owner: DIGITAL CACHE LLC
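
When the traversal path is fixed in advance, one classic realization of the described transformation is jump-pointer prefetching: each node is augmented with a pointer to the node a fixed distance ahead, so the prefetch address for a future node is available without chasing the intervening pointers. The C sketch below applies this to a singly linked list; the node layout, the DIST constant, and the function names are illustrative assumptions, not details taken from the patent.

```c
/* Jump-pointer transformation for a singly linked list, assuming
 * GCC/Clang's __builtin_prefetch. Illustrative names throughout. */
#include <stddef.h>

#define DIST 4  /* prefetch distance in nodes; tune to memory latency */

typedef struct JNode {
    int payload;
    struct JNode *next;
    struct JNode *jump;  /* node DIST links ahead, or NULL near the tail */
} JNode;

/* One-time transformation: walk a cursor DIST nodes ahead of p and
 * record it, so every node knows the address it will need soon. */
void add_jump_pointers(JNode *head)
{
    JNode *lead = head;
    for (int i = 0; i < DIST && lead != NULL; i++)
        lead = lead->next;
    for (JNode *p = head; p != NULL; p = p->next) {
        p->jump = lead;
        if (lead != NULL)
            lead = lead->next;
    }
}

/* Traversal with pipelined prefetching: issue the prefetch for the
 * node DIST steps ahead, then do the work for the current node while
 * that cache line is in flight. */
long sum_list(const JNode *head)
{
    long sum = 0;
    for (const JNode *p = head; p != NULL; p = p->next) {
        if (p->jump != NULL)
            __builtin_prefetch(p->jump);
        sum += p->payload;
    }
    return sum;
}
```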

Computer system including plural caches and utilizing access history or patterns to determine data ownership for efficient handling of software locks

A system and method for enabling a multiprocessor system employing a memory hierarchy to identify data units or locations being used as software locks. The memory hierarchy comprises a main memory having a plurality of data units, a plurality of caches that operate independently of each other, and at least one coherent domain interfaced to each cache. Each coherent domain comprises at least two processors. The main memory maintains coherency of data among the plurality of caches using a directory that maintains information about each data line. The system of the present invention allows a requesting agent, such as a processor or cache, to request a data unit without specifying the type of ownership, where ownership may be exclusive or shared. The directory includes history information that records the previous access pattern of the requested data unit. Prior to forwarding the requested data unit to the requesting agent, the main memory checks, using a conditional fetch command, the history information to determine what type of ownership to associate with the requested data unit. The requested data unit is then delivered to the requesting agent with the ownership rights specified by the history information. The processors may utilize a protocol such as MESI (modified, exclusive, shared, invalid) to maintain coherence among themselves, with each processor snooping a shared bus to track the status of cache lines in the other processors.
Owner: UNISYS CORP
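
A toy model of the history-guided ownership decision is sketched below in C. The history encoding (a counter of consecutive writes) and the lock-detection threshold are assumptions made for illustration; the abstract does not fix how history is represented.

```c
/* Toy directory entry with access history; supports up to 8 caches
 * in this sketch. */
#include <stdint.h>

#define LOCK_THRESHOLD 2  /* consecutive writes that suggest a lock word */

typedef enum { INVALID, SHARED, EXCLUSIVE } OwnState;

typedef struct {
    OwnState state;
    uint8_t  sharers;       /* bitmask of caches holding the line          */
    uint8_t  write_streak;  /* consecutive writes observed by the directory */
} DirEntry;

/* History maintenance: the directory updates these as it observes
 * writebacks/upgrades (writes) and plain reads. */
void record_write(DirEntry *e) { if (e->write_streak < UINT8_MAX) e->write_streak++; }
void record_read(DirEntry *e)  { e->write_streak = 0; }

/* Conditional fetch: the requester names no ownership type; the
 * directory chooses one from the line's history. A line whose recent
 * accesses were writes (typical of a software lock) is granted
 * exclusive, so the eventual lock release needs no separate upgrade
 * transaction; otherwise it is granted shared for read concurrency. */
OwnState conditional_fetch(DirEntry *e, int requester)
{
    if (e->write_streak >= LOCK_THRESHOLD) {
        e->state   = EXCLUSIVE;
        e->sharers = (uint8_t)(1u << requester);  /* sole owner   */
    } else {
        e->state   = SHARED;
        e->sharers |= (uint8_t)(1u << requester); /* add a sharer */
    }
    return e->state;
}
```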

Avoiding inconsistencies between multiple translators in an object-addressed memory hierarchy

One embodiment of the present invention provides a system that avoids inconsistencies between multiple translators in an object-addressed memory hierarchy. This object-addressed memory hierarchy includes an object cache, which supports references to object cache lines based on object identifiers instead of physical addresses. During operation, the system receives a read-to-share (RTS) signal for an object cache line, wherein the RTS signal is received from a requesting processor as part of a cache-coherence operation. If no processor owns the object cache line, the system causes the requesting processor to become the owner of the object cache line instead of merely holding a copy of the object cache line in the shared state. The system also generates a translation for the object cache line in a translator associated with the requesting processor, wherein the translation maps an object identifier and a corresponding offset to a physical address for the object cache line, and the contents of the object cache line are reconstructed by reading from memory at that physical address. In this way, if the requesting processor owns the object cache line, a subsequent processor that requests the same object cache line will receive the object cache line from the requesting processor, and will not generate an additional translation for the object cache line. This ensures that multiple translators will not generate inconsistent translations for the same object cache line.
Owner: ORACLE INT CORP
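
The read-to-share handling can be sketched compactly. In the toy C model below, the OLine layout and the translate_and_fill helper are illustrative assumptions; the essential rule, granting ownership of an unowned line to the requester so that exactly one translation is ever generated, follows the abstract.

```c
/* Toy model of the RTS rule for an object-addressed cache line. */
#include <stdint.h>

typedef struct {
    uint64_t oid;     /* object identifier                           */
    uint32_t offset;  /* offset of this cache line within the object */
    int      owner;   /* owning processor, or -1 if unowned          */
    uint64_t paddr;   /* physical address, valid once translated     */
} OLine;

/* Stand-in for the per-processor translator: maps (oid, offset) to a
 * physical address and fills the line from memory at that address.
 * The mapping below is fake and exists only so the sketch compiles. */
static uint64_t translate_and_fill(int cpu, uint64_t oid, uint32_t offset)
{
    (void)cpu;
    return (oid << 20) | offset;
}

/* Handle a read-to-share request from processor `cpu` for line `l`. */
void handle_rts(OLine *l, int cpu)
{
    if (l->owner < 0) {
        /* Unowned: make the requester the owner rather than a mere
         * sharer, and generate the single authoritative translation
         * in that requester's translator. */
        l->owner = cpu;
        l->paddr = translate_and_fill(cpu, l->oid, l->offset);
    } else {
        /* Already owned: the data is forwarded cache-to-cache from
         * l->owner, so no second translation is ever generated and
         * translators cannot diverge. (Transfer elided.) */
    }
}
```

Because the owned case forwards data from the existing owner rather than re-translating, two translators can never hold divergent mappings for the same line.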