184 results about "Cache hierarchy" patented technology

Cache hierarchy, or multi-level caches, refers to a memory architecture that uses a hierarchy of memory stores with varying access speeds to cache data. Highly requested data is cached in high-speed memory stores, allowing swifter access by central processing unit (CPU) cores. Cache hierarchy is a form and part of the memory hierarchy, and can be considered a form of tiered storage.
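
The definition above describes the general mechanism: the faster, smaller level is searched first, and data found in a slower level is promoted into the faster one on its way back to the CPU. The sketch below illustrates that lookup path in Python; the class names, the LRU policy, and the two-level structure are illustrative assumptions, not tied to any of the patents listed here.

```python
# Minimal sketch of a two-level cache hierarchy (illustrative only).
from collections import OrderedDict

class CacheLevel:
    """A single cache level with LRU eviction and a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> data

    def get(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)  # mark as most recently used
            return self.lines[addr]
        return None

    def put(self, addr, data):
        self.lines[addr] = data
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict least recently used

def lookup(addr, l1, l2, main_memory):
    """Search the hierarchy from fastest to slowest, filling caches on the way back."""
    data = l1.get(addr)
    if data is not None:
        return data                      # L1 hit: fastest path
    data = l2.get(addr)
    if data is None:
        data = main_memory[addr]         # miss in both levels: go to memory
        l2.put(addr, data)
    l1.put(addr, data)                   # promote into the faster level
    return data
```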

Derivation method for cached keys in wireless communication system

A method and apparatus for providing improved security and improved roaming transition times in wireless networks. In the present invention, the same pairwise master key (PMK) from an authentication server can be used across multiple access points and a new pairwise transition key (PTK) is derived for each association of a station to any of the access points. A plurality of access points are organized in functional hierarchical levels and are operable to advertise an indicator of the PMK cache depth supported by a group of access points (N) and an ordered list of the identifiers for the derivation path. Access points in each level in the cache hierarchy compute the derived pairwise master keys (DPMKs) for devices in the next lower level in the hierarchy and then deliver the DPMKs to those devices. An access point calculates the PTK as part of the security exchange process when the station wishes to associate to the access point. The station also computes the PTK as part of the security exchange process. The station calculates all the DPMKs in the hierarchy as part of computing the PTK. The method and apparatus of the present invention allows the cache depth to vary per station, but it remains constant for a given station within a key circle.
Owner:AVAGO TECH INT SALES PTE LTD
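
The abstract above describes a chain of derived pairwise master keys (DPMKs), one per level of the access-point hierarchy, which the station recomputes when deriving its PTK. The sketch below shows one way such a chain could be computed; the use of HMAC-SHA256 and the exact inputs to each derivation step are assumptions for illustration, since the abstract does not specify the key-derivation function.

```python
# Hedged sketch of a hierarchical key chain in the spirit of the abstract:
# each level derives a DPMK for the next lower level from its own key and
# that level's identifier. HMAC-SHA256 and the input format are assumptions.
import hashlib
import hmac

def derive_key(parent_key: bytes, level_id: bytes) -> bytes:
    """Derive a child key (DPMK) from a parent key and a level identifier."""
    return hmac.new(parent_key, b"DPMK" + level_id, hashlib.sha256).digest()

def derive_chain(pmk: bytes, path_ids: list) -> list:
    """Walk the ordered list of identifiers for the derivation path,
    producing one DPMK per hierarchy level (cache depth = len(path_ids))."""
    keys = []
    key = pmk
    for level_id in path_ids:
        key = derive_key(key, level_id)
        keys.append(key)
    return keys

# Example: a station recomputing the same chain the access points hold.
pmk = bytes(32)                          # placeholder master key
path = [b"ap-level-1", b"ap-level-2"]    # hypothetical level identifiers
dpmks = derive_chain(pmk, path)
```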

Mechanism for high performance transfer of speculative request data between levels of cache hierarchy

A method of operating a processing unit of a computer system, by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower-level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower-level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value. The prefetch limit of cache usage may be established as a maximum number of sets in a congruence class usable by the requesting processing unit. A flag in a directory of the cache may be set to indicate that the prefetch value was retrieved as the result of a prefetch operation. In the implementation wherein the cache is a multi-level cache, a second flag in the cache directory may be set to indicate that the prefetch value has been sourced to an upstream cache. A cache line containing prefetch data can be automatically invalidated after a preset amount of time has passed since the prefetch value was requested.
Owner:IBM CORP
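
Two details from the abstract above lend themselves to a small illustration: the per-line directory flag marking values retrieved by a prefetch, and the prefetch limit expressed as a maximum number of sets in a congruence class. The Python sketch below models only that bookkeeping; the class names and the replacement choices are assumptions, and a real implementation would be hardware logic rather than software.

```python
# Hedged sketch of a per-congruence-class prefetch limit with a directory flag.
class Line:
    def __init__(self, tag, data, prefetched=False):
        self.tag = tag
        self.data = data
        self.prefetched = prefetched   # directory flag: value arrived via prefetch

class CongruenceClass:
    def __init__(self, num_sets, prefetch_limit):
        self.num_sets = num_sets
        self.prefetch_limit = prefetch_limit   # max sets usable for prefetch data
        self.lines = []

    def insert_prefetch(self, tag, data):
        """Install a prefetched value, reusing an earlier prefetch line if the limit is met."""
        prefetch_lines = [line for line in self.lines if line.prefetched]
        if len(prefetch_lines) >= self.prefetch_limit:
            victim = prefetch_lines[0]           # reuse a line holding an earlier prefetch value
            victim.tag, victim.data = tag, data
        elif len(self.lines) < self.num_sets:
            self.lines.append(Line(tag, data, prefetched=True))
        else:
            # No free set: replace an existing line (replacement policy unspecified here).
            self.lines[0] = Line(tag, data, prefetched=True)
```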

Method and apparatus for optimizing cache hit ratio in non-L1 caches

A method and apparatus for increasing the performance of a computing system and increasing the hit ratio in at least one non-L1 cache. A caching assistant and a processor are embedded in a processing system. The caching assistant analyzes system activity, monitors and coordinates data requests from the processor, other processors, and other data-accessing devices, and monitors and analyzes data accesses throughout the cache hierarchy. The caching assistant is provided with a dedicated cache for storing fetched and prefetched data. The caching assistant improves the performance of the computing system by anticipating which data is likely to be requested for processing next, and by accessing and storing that data in an appropriate non-L1 cache before the data is requested by processors or data-accessing devices. A method for increasing processor performance includes analyzing system activity and optimizing a hit ratio in at least one non-L1 cache. The caching assistant performs processor data requests by accessing caches and monitoring the data requests, both to gain knowledge of the program code currently being processed and to determine whether data access patterns exist. Based upon the knowledge gained through monitoring data accesses, the caching assistant anticipates future data requests.
Owner:IBM CORP
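
The abstract above leaves the caching assistant's analysis unspecified. As one concrete illustration of "anticipating which data is likely to be requested next", the sketch below uses a simple stride detector that prefetches into a dictionary standing in for a non-L1 cache; the stride heuristic and all names are assumptions, not the patent's method.

```python
# Hedged sketch of pattern-based prefetching into a non-L1 cache.
class StridePrefetcher:
    def __init__(self, l2_cache):
        self.l2_cache = l2_cache      # dict-like non-L1 cache: address -> data
        self.last_addr = None
        self.last_stride = None

    def observe(self, addr, fetch_from_memory):
        """Watch each data access; on a repeated stride, prefetch the next address."""
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.last_stride and stride != 0:
                next_addr = addr + stride
                if next_addr not in self.l2_cache:
                    # Store anticipated data in the non-L1 cache before it is requested.
                    self.l2_cache[next_addr] = fetch_from_memory(next_addr)
            self.last_stride = stride
        self.last_addr = addr
```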