Method for dynamically dividing shared high-speed caches and circuit

A partitioning algorithm and dynamic partitioning technique, applied in the field of computing, which address problems such as system performance degradation and achieve the effect of reducing such degradation and improving performance

Inactive Publication Date: 2012-07-25
FUDAN UNIV

AI Technical Summary

Problems solved by technology

However, the existing partitioning technology often puts too much emphasis on improving the performance of a...



Examples


Embodiment Construction

[0016] The present invention adopts the multi-core system simulator M-Sim to simulate a dual-core system based on the Alpha instruction set; the specific configuration is shown in Table 1. In this dual-core system with a shared level-2 Cache, the overall framework for applying the novel shared-Cache dynamic partitioning method proposed by the present invention is shown in Figure 1. Multiple test cases are selected from the SPEC CPU2000 benchmark suite and run in pairs on the dual-core architecture built with M-Sim, as listed in Table 2.

[0017] Table 1 Simulation environment configuration

[0018]
CPU: 2 cores, 8-wide, out-of-order, 48-entry LSQ, 128-entry ROB
Level 1 Instruction Cache: private, 2KB, 32B line size, 2-way, LRU
Level 1 Data Cache: private, 2KB, 32B line size, 2-way, LRU
Level 2 Cache: shared, 64KB, 32B line size, 16-way
Memory: 300-cycle access latency
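For reference, the Table 1 parameters can be gathered into a single configuration record, as in the minimal C sketch below. The struct and its field names are illustrative assumptions of ours and do not correspond to M-Sim's actual configuration interface.

```c
/* Illustrative only: Table 1 parameters gathered into one record.
 * Field names are ours, not M-Sim's configuration interface. */
#include <stdio.h>

struct sim_config {
    int cores;              /* 2 cores                          */
    int issue_width;        /* 8-wide, out-of-order             */
    int lsq_entries;        /* 48-entry LSQ                     */
    int rob_entries;        /* 128-entry ROB                    */
    int l1_size_bytes;      /* 2 KB each, private I and D Cache */
    int l1_line_bytes;      /* 32 B line                        */
    int l1_ways;            /* 2-way, LRU                       */
    int l2_size_bytes;      /* 64 KB, shared                    */
    int l2_line_bytes;      /* 32 B line                        */
    int l2_ways;            /* 16-way                           */
    int mem_latency_cycles; /* 300-cycle memory access          */
};

int main(void) {
    struct sim_config cfg = { 2, 8, 48, 128,
                              2 * 1024, 32, 2,
                              64 * 1024, 32, 16, 300 };
    /* Number of sets in the shared L2: size / (line * ways) */
    int l2_sets = cfg.l2_size_bytes / (cfg.l2_line_bytes * cfg.l2_ways);
    printf("shared L2: %d sets x %d ways\n", l2_sets, cfg.l2_ways);
    return 0;
}
```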

[0019] Table 2 Test Cases

[0020] serial numbe...



Abstract

The invention belongs to the technical field of computers and specifically discloses a method and circuit for dynamically partitioning a shared high-speed cache (Cache). A monitoring circuit and a dividing circuit are added to the shared Cache: the monitoring circuit monitors each core's utilization of the shared Cache, the dividing circuit computes from the monitored information the optimal number of Cache ways to allocate to each core, and the shared Cache operates under the control of the dividing circuit's result. A novel partitioning algorithm and replacement strategy are provided, and free ways are introduced into the shared Cache, which effectively suppresses the impact of an improper partition on system performance. As a result, the method improves system performance when the shared Cache is partitioned correctly, while greatly reducing the performance degradation caused by an incorrect partition. Compared with not partitioning the shared Cache and with utility-based shared-Cache partitioning, the proposed method improves system performance by 13.17% and 8.83% on average, respectively.
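The abstract describes the scheme at the level of a monitoring circuit, a dividing circuit, and reserved free ways, but this excerpt does not disclose the exact partitioning computation. The C sketch below is a minimal software model of that structure under our own assumptions: monitored_utility stands in for the monitoring circuit's counters, divide_ways for the dividing circuit, and FREE_WAYS for the unpartitioned ways. The allocation rule (a proportional split) and all names are illustrative, not the patented algorithm, and the replacement strategy is not modeled.

```c
/* Illustrative software model of way-partitioning with reserved "free" ways.
 * The allocation heuristic and all names are assumptions for illustration;
 * the patent's actual monitoring/dividing circuits are not reproduced here. */
#include <stdio.h>

#define CORES      2
#define L2_WAYS    16
#define FREE_WAYS  2   /* assumed: ways left unpartitioned, usable by any core */

/* Hypothetical monitor output: per-core utility of the shared Cache
 * (e.g., derived from per-way utilization counters). */
static unsigned long monitored_utility[CORES] = { 7400, 2600 };

/* Dividing step: split the partitionable ways in proportion to the
 * monitored utility, keeping FREE_WAYS shared as a safety margin. */
static void divide_ways(int alloc[CORES]) {
    int partitionable = L2_WAYS - FREE_WAYS;
    unsigned long total = 0;
    for (int c = 0; c < CORES; c++) total += monitored_utility[c];

    int given = 0;
    for (int c = 0; c < CORES; c++) {
        /* each core keeps at least one dedicated way */
        alloc[c] = 1 + (int)((partitionable - CORES) *
                             monitored_utility[c] / total);
        given += alloc[c];
    }
    alloc[0] += partitionable - given;  /* hand any rounding remainder to core 0 */
}

int main(void) {
    int alloc[CORES];
    divide_ways(alloc);
    for (int c = 0; c < CORES; c++)
        printf("core %d: %d dedicated ways\n", c, alloc[c]);
    printf("free ways shared by all cores: %d\n", FREE_WAYS);
    return 0;
}
```

In this model the free ways act as a cushion: even if divide_ways mis-sizes a core's share, every core can still compete for the unpartitioned FREE_WAYS, which mirrors the abstract's point that free ways suppress the penalty of an improper partition.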

Description

technical field
[0001] The invention belongs to the technical field of computers and specifically relates to a method and circuit for dynamically partitioning a shared high-speed cache (Cache).
Background technique
[0002] With the development of processor technology, the advantages of multi-core processors have become increasingly obvious; they have gradually replaced single-core processors and become the new direction of microprocessor development. In multi-core processor architectures, the last level of Cache is usually shared: for example, the third-level Cache is shared in the IBM POWER6 and Intel i7 architectures, and the second-level Cache is shared in the Sun UltraSPARC T2 architecture. Since the last-level Cache is shared by all cores, one core's active data is likely to be replaced by miss traffic from other cores, resulting in system performance degradation.
[0003] In order to reduce the impact of this mutual pollution on system pe...
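The pollution mechanism in [0002] can be seen in a toy model: with a shared, LRU-managed set and no partitioning, one core's streaming misses eventually evict another core's active line. The C sketch below is purely illustrative; the set size and access patterns are our own assumptions and are unrelated to the configuration in Table 1.

```c
/* Toy model of the pollution problem described in [0002]: in one shared,
 * LRU-managed set, a streaming core's misses evict the other core's
 * active line. Purely illustrative; sizes and access patterns are made up. */
#include <stdio.h>

#define WAYS 4

static long tags[WAYS];
static int  age[WAYS];      /* larger = older */

static int lookup_or_fill(long tag) {
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (tags[w] == tag) {                  /* hit: refresh LRU age */
            for (int v = 0; v < WAYS; v++) age[v]++;
            age[w] = 0;
            return 1;
        }
        if (age[w] > age[victim]) victim = w;  /* track the LRU way */
    }
    for (int v = 0; v < WAYS; v++) age[v]++;   /* miss: evict LRU way */
    tags[victim] = tag;
    age[victim] = 0;
    return 0;
}

int main(void) {
    for (int w = 0; w < WAYS; w++) { tags[w] = -1; age[w] = w; }

    lookup_or_fill(100);                       /* core 0 installs its hot line   */
    for (long t = 0; t < 8; t++)
        lookup_or_fill(1000 + t);              /* core 1 streams through the set */

    printf("core 0 re-reference: %s\n",
           lookup_or_fill(100) ? "hit" : "miss (evicted by core 1)");
    return 0;
}
```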


Application Information

IPC(8): G06F12/08, G06F9/50, G06F12/084, G06F12/12
Inventor 周晓方, 倪亚路
Owner FUDAN UNIV