
Apparatus and method for dynamic cache management

A cache and device technology, applied in memory systems, electrical digital data processing, and memory address/allocation/relocation, that addresses problems such as unnecessary data transfers and the mismatch between actual and desired cache size.

Status: Inactive
Publication Date: 2009-01-07
NXP BV

AI Technical Summary

Problems solved by technology

However, if the average cache requirement of the FIFOs is greater than what a single cache can provide, a cache mismatch results. This mismatch between the actual cache size and the desired cache size forces other memory blocks within the cache to be sacrificed so that those blocks can be used for a particular FIFO.

[0010] For example, in some cases, memory blocks that will be needed immediately may be incorrectly selected for sacrifice, resulting in additional, unnecessary data transfers. Conversely, blocks that will not be used in the near future, and are therefore suitable candidates for sacrifice, may never be selected.




Detailed Description of Embodiments

[0016] While the invention is amenable to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

[0017] Figure 1 shows cache 100, which includes EPT counter 102 and ECT counter 104. Cache 100 contains five FIFOs, each occupying a portion of cache 100. Each FIFO handles data. According to one embodiment of the present invention, cache memory 100 may be a single level of memory. According to another embodiment, cache memory 100 has multiple levels. Another aspect of the invention includes cache memory 100 being shared between multiple processors, or by a single processor w...
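The arrangement Figure 1 describes can be sketched in C. This is a minimal illustration, not code from the patent; the type and field names (fifo_t, cache_t, ept, ect, and so on) are assumptions made for clarity.

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_FIFOS 5            /* Figure 1 shows five FIFOs in the cache */

    /* Per-FIFO schedule information: when data is expected to be produced
     * into the FIFO and when it is expected to be consumed from it. */
    typedef struct {
        uint32_t ept;              /* Estimated Production Time counter (102) */
        uint32_t ect;              /* Estimated Consumption Time counter (104) */
        size_t   first_block;      /* first cache block assigned to this FIFO */
        size_t   num_blocks;       /* portion of the cache the FIFO occupies */
    } fifo_t;

    /* The cache (100): a pool of memory blocks, part of which is divided
     * among the FIFOs. It may be a single level or multiple levels, and
     * may be shared between multiple processors. */
    typedef struct {
        size_t total_blocks;
        fifo_t fifos[NUM_FIFOS];
    } cache_t;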



Abstract

The apparatus of the present invention improves performance of computing systems by enabling a multi-core or multi-processor system to deterministically identify cache memory (100) blocks that are ripe for victimization and also prevent victimization of memory blocks that will be needed in the immediate future. To achieve these goals, the system has a FIFO with schedule information available in the form of Estimated Production Time (EPT) (102) and Estimated Consumption Time (ECT) (104) counters to make suitable pre-fetch and write-back decisions so that data transmission is overlapped with processor execution.
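A rough sketch of how the EPT and ECT counters could drive these decisions, assuming the cache_t/fifo_t definitions from the sketch above and a hypothetical current-time value supplied by the caller. The policy shown (victimize the FIFO whose next consumption lies furthest in the future, and prefetch for a FIFO whose production is imminent) is one plausible reading of the abstract, not the claimed algorithm itself.

    /* Pick the FIFO whose next consumption (ECT) is furthest away: its
     * blocks will not be needed in the immediate future, so victimizing
     * them avoids the extra, unnecessary transfers described above. */
    static size_t pick_victim_fifo(const cache_t *c)
    {
        size_t victim = 0;
        uint32_t latest_ect = c->fifos[0].ect;
        for (size_t i = 1; i < NUM_FIFOS; i++) {
            if (c->fifos[i].ect > latest_ect) {
                latest_ect = c->fifos[i].ect;
                victim = i;
            }
        }
        return victim;
    }

    /* Prefetch for a FIFO once its production time (EPT) is close, so
     * the data transfer overlaps with ongoing processor execution. */
    static int should_prefetch(const fifo_t *f, uint32_t now, uint32_t lead)
    {
        /* modular subtraction stays correct if the counter wraps */
        return (uint32_t)(f->ept - now) <= lead;
    }

With both decisions available, write-back of a victim FIFO's blocks and prefetch for an imminent producer can be issued ahead of need, which is the overlap of data transmission and processor execution that the abstract describes.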

Description

Technical Field

[0001] The present invention relates to data processing systems, and more particularly to multiprocessor systems with optimized cache management.

Background

[0002] Advances in computer hardware and software technology have produced multiprocessor computer systems that can perform highly complex parallel processing by logically partitioning system resources into different tasks. Processors may reside on one or more processor modules, typically with at least a second-level cache.

[0003] In general, accessing cache memory is faster than accessing main memory. The cache is usually located on the processor module, or within the processor. A cache acts as a buffer that holds recently used instructions and data, thereby reducing the latency associated with fetching instructions and data from main memory each time they are needed.

[0004] Some caches hold the most frequently used memory lines from main memory. A memory line is the smallest readable u...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F12/12
CPC: G06F12/126; G06F12/0893
Inventor: Milind Kulkarni, Narendranath Udupa
Owner: NXP BV