
Performance optimization method for sharing cache

A cache performance optimization technology, applied to memory systems and memory address allocation/relocation, which addresses the problems of useless blocks wasting on-chip resources and low-reuse blocks occupying space needed by high-reuse blocks, thereby improving overall cache performance.

Inactive Publication Date: 2011-01-12
SUZHOU INST FOR ADVANCED STUDY USTC


Problems solved by technology

[0005] The traditional LRU replacement strategy places the most recently hit data block in the MRU (Most Recently Used) position and evicts the block in the LRU position. Some data blocks, once brought on chip, are never referenced again; such blocks are called useless (dead) blocks. Under LRU they remain on chip for a long time and are evicted only after drifting down to the LRU position, which greatly wastes on-chip resources. A good shared-cache management strategy should therefore be able to predict and evict useless blocks early. In addition, the traditional LRU strategy treats all on-chip data blocks equally, but some blocks are referenced only a few times after being brought on chip; such low-reuse blocks, if kept on chip for a long time, occupy space that high-reuse blocks need, again wasting on-chip resources. The strategy should therefore also filter out low-reuse blocks while inserting and promoting hit blocks.
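The dead-block pathology described above can be seen in a toy model of one cache set under classic LRU (an illustrative sketch only; the class and tags below are not from the patent):

```python
from collections import OrderedDict

class LRUCacheSet:
    """Toy model of one W-way set-associative cache set under classic LRU."""

    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()  # first key = LRU, last key = MRU

    def access(self, tag):
        if tag in self.blocks:                 # hit: promote to MRU
            self.blocks.move_to_end(tag)
            return True
        if len(self.blocks) == self.ways:      # miss on a full set:
            self.blocks.popitem(last=False)    #   evict the LRU block
        self.blocks[tag] = None                # insert new block at MRU
        return False

# 'D' is a dead block: referenced once, then never again.  Under LRU it
# still occupies a way until it drifts all the way to the LRU position.
s = LRUCacheSet(ways=4)
for t in ["A", "B", "C", "D"]:
    s.access(t)
s.access("A"); s.access("B"); s.access("C")  # three hits push D to LRU
s.access("E")                                # only now is dead block D evicted
print(list(s.blocks))                        # ['A', 'B', 'C', 'E']
```

The trace shows the waste the patent targets: the dead block D holds a way through three further accesses before LRU finally evicts it.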




Embodiment Construction

[0024] Figure 1 shows the hardware structure of the ELF predictor for a 4-core processor. The processor has a W-way set-associative shared level-2 cache; each processor core has a private level-1 cache, and these level-1 caches are connected through a bus to the shared level-2 cache. The ELF strategy adds an independent prediction-table structure between the L2 cache and main memory, used to save the usage-frequency history (a 4-bit counter, maxCstored) of data that is no longer cached and to restore that usage-frequency information when the data re-enters the cache. The prediction table is organized as a 256×256 direct-mapped, untagged two-dimensional matrix, where rows are indexed by the hashedPC field of the cache block (the XOR of every 8 bits of the instruction PC that caused the cache miss) and columns are indexed by the XOR of every 8 bits of the block address. This method can distinguish di...
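The "XOR of every 8 bits" index described above can be sketched as a byte-fold hash (a minimal sketch; the function and parameter names are illustrative, not from the patent):

```python
def fold_xor8(value: int) -> int:
    """Fold an address or PC into 8 bits by XOR-ing its successive bytes."""
    h = 0
    while value:
        h ^= value & 0xFF   # XOR in the low byte
        value >>= 8         # move to the next byte
    return h

def prediction_table_index(miss_pc: int, block_addr: int):
    """Row/column index into the 256x256 direct-mapped, untagged table:
    rows by hashedPC (folded miss PC), columns by the folded block address."""
    return fold_xor8(miss_pc), fold_xor8(block_addr)

# Example: 0x12 ^ 0x34 ^ 0x56 ^ 0x78 == 0x08
print(hex(fold_xor8(0x12345678)))   # 0x8
```

Because each fold yields an 8-bit value, both indices always fall in 0..255, matching the 256×256 table, and the table needs no tags: distinct addresses that collide simply share a counter.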



Abstract

The invention discloses a performance optimization method for a shared cache that manages shared-cache resources through useless-block elimination and low-reuse-block filtering, so that cache resources are fully used and higher performance is obtained. The method uses a counting-based algorithm to predict useless blocks and evict them as early as possible, and uses a dynamic insertion and promotion strategy to filter low-reuse data, so as to retain potentially active data and prevent part of the working set from being disturbed by infrequently used data. Experiments show that the invention achieves higher performance than traditional shared-cache management policies.
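The abstract's "dynamic insertion and promotion" filter is not fully specified in this excerpt; a common dynamic-insertion scheme from the literature, LIP-style insertion at the LRU position with promotion to MRU only on a re-reference, illustrates the idea. The sketch below is under that assumption, not necessarily the patent's exact policy:

```python
class DynamicInsertSet:
    """One cache set where new blocks enter at the LRU end and are promoted
    to MRU only when re-referenced.  A streaming, low-reuse block is evicted
    quickly instead of displacing the hot working set.
    (LIP-style illustration; not claimed to be the patent's exact policy.)"""

    def __init__(self, ways):
        self.ways = ways
        self.stack = []          # index 0 = LRU, last index = MRU

    def access(self, tag):
        if tag in self.stack:            # hit: promote to MRU
            self.stack.remove(tag)
            self.stack.append(tag)
            return True
        if len(self.stack) == self.ways:
            self.stack.pop(0)            # evict LRU block
        self.stack.insert(0, tag)        # insert new block at LRU, not MRU
        return False

# Hot blocks A and B are re-referenced; streaming blocks C, D, E are not.
s = DynamicInsertSet(ways=3)
for t in ["A", "B", "A", "B", "C", "D", "E"]:
    s.access(t)
print(s.stack)   # ['E', 'A', 'B'] -- the hot working set survives the scan
```

Under plain LRU the scan C, D, E would have inserted each streaming block at MRU and pushed A and B out; inserting at the LRU end makes a low-reuse block the next eviction candidate unless it earns promotion.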

Description

Technical Field

[0001] The invention relates to the field of computer hardware, and in particular to a method for optimizing the performance of a shared cache.

Background

[0002] With the widening speed gap between processors and memory, memory-system design has become one of the key factors affecting computer system performance. Multi-core processors currently provide fast access to recently accessed data through a large-capacity, highly associative last-level on-chip cache (Last-Level Cache, LLC). In theory, to achieve the highest cache hit rate, the LLC should adopt the optimal replacement algorithm (OPT). In practice, the least recently used (LRU) replacement strategy or its approximations are widely used in commercial processors.

[0003] However, a large body of recent research has shown that the performance gap between LRU and OPT in high associativ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F12/08, G06F12/084, G06F12/123
Inventors: 吴俊敏, 赵小雨, 隋秀峰, 尹巍, 唐轶轩, 朱小东
Owner: SUZHOU INST FOR ADVANCED STUDY USTC