
Thread-aware multi-core data prefetch self-tuning method

A data prefetching and threading technology, applied in the field of multi-core storage system performance optimization, that addresses the problem of one thread's prefetch requests hurting other threads' private cache hits, achieving the effect of improving hit rate and reducing inter-thread competition.

Active Publication Date: 2016-01-20
ZHEJIANG UNIV
Cites: 2, Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Since prefetching triggers cache-line replacement, each thread's prefetch requests may affect the hit rate of other threads' private caches.

Method used

Figure 1 shows the structure of the thread-aware multi-core data prefetching device; Figure 2 shows the internal structure of a processor 101, with its tiles, private level-1 data caches, and shared level-2 cache.

Examples

Embodiment 1

[0021] Embodiment 1, described with reference to Figure 1 and Figure 2, is a thread-aware multi-core data prefetch self-tuning method that runs on a thread-aware multi-core data prefetching device. As shown in Figure 1, the device comprises multiple (at least two) processors 101 and routers 103; the processors 101 are connected to one another through an on-chip interconnection network.

[0022] As shown in Figure 2, each processor 101 includes a number of nodes 131 (i.e., tiles), a number of level-1 caches (the private level-1 data caches 102 in Figure 2), and a level-2 cache (the shared level-2 cache 105 in Figure 2). The nodes 131 correspond one-to-one with the level-1 caches; that is, each node 131 independently owns a private level-1 data cache 102, while all nodes 131 share the level-2 cache 105 (assumed here to be the on-chip last-level cache, LLC), and several nodes 131 (Tile), sever...
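To make the organization of Figures 1 and 2 easier to follow, the sketch below models it as plain C++ data structures. This is only an illustrative reading of the embodiment: the type and field names (Tile, L1Cache, SharedL2, Chip) are assumptions of this summary, not labels taken from the patent.

    #include <vector>

    // Illustrative model (names are assumptions, not the patent's labels).
    struct L1Cache  { /* private level-1 data cache 102; tag/data arrays elided */ };
    struct SharedL2 { /* shared level-2 cache 105, the on-chip LLC; arrays elided */ };

    struct Tile {                 // node 131
        int     id;
        L1Cache l1d;              // one-to-one: each tile owns a private L1
    };

    struct Processor {            // processor 101
        std::vector<Tile> tiles;  // several nodes 131 per processor
        SharedL2          llc;    // shared by all tiles
    };

    struct Chip {                 // Figure 1: processors 101 joined by routers 103
        std::vector<Processor> processors;   // at least two, on one interconnect
    };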

Abstract

The invention discloses a thread-aware multi-core data prefetch self-tuning method. The method comprises: a first step of dynamic feedback statistics, counting the memory-access behavior and prefetching behavior of every thread with hardware counters; a second step of index calculation, computing from those statistics the memory-access characteristic indexes that measure each thread's degree of contention, together with its prefetching characteristic indexes; a third step of thread classification, classifying the threads according to their memory-access and prefetching characteristic indexes; a fourth step of prefetch adjustment, adjusting the prefetching mode and aggressiveness according to the classification results; and a fifth step of harmful-prefetch filtering, filtering out prefetch requests that may cause invalidation of shared data.
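Read as a control loop, the five steps repeat every sampling interval: count, derive indexes, classify, retune, filter. The C++ sketch below is a minimal rendering of that loop under stated assumptions; the particular counters, the 0.5/0.1 thresholds, and the aggressiveness values are hypothetical stand-ins, since the abstract does not fix concrete metrics.

    #include <cstdint>

    // Step 1 (assumed counter set): per-thread hardware counters sampled each interval.
    struct ThreadCounters {
        uint64_t demand_accesses   = 0;
        uint64_t demand_misses     = 0;
        uint64_t prefetches_issued = 0;
        uint64_t prefetches_useful = 0;   // prefetched lines later hit by a demand access
    };

    // Step 2: indexes measuring contention and prefetch quality (formulas assumed).
    struct ThreadIndexes {
        double miss_rate;                 // memory-access characteristic
        double prefetch_accuracy;         // prefetching characteristic
    };

    enum class ThreadClass { PrefetchFriendly, MemoryIntensive, PrefetchHostile };

    struct PrefetchConfig {
        bool enabled;
        int  aggressiveness;              // e.g. prefetch degree / distance
    };

    ThreadIndexes computeIndexes(const ThreadCounters& c) {   // step 2
        ThreadIndexes ix{0.0, 0.0};
        if (c.demand_accesses)   ix.miss_rate = double(c.demand_misses) / c.demand_accesses;
        if (c.prefetches_issued) ix.prefetch_accuracy =
                                     double(c.prefetches_useful) / c.prefetches_issued;
        return ix;
    }

    ThreadClass classify(const ThreadIndexes& ix) {           // step 3 (thresholds assumed)
        if (ix.prefetch_accuracy < 0.5) return ThreadClass::PrefetchHostile;
        if (ix.miss_rate > 0.1)         return ThreadClass::MemoryIntensive;
        return ThreadClass::PrefetchFriendly;
    }

    PrefetchConfig adjust(ThreadClass cls) {                  // step 4
        switch (cls) {
            case ThreadClass::PrefetchFriendly: return {true, 4};   // prefetch aggressively
            case ThreadClass::MemoryIntensive:  return {true, 1};   // prefetch conservatively
            default:                            return {false, 0};  // throttle entirely
        }
    }

    // Step 5: a real filter would consult coherence state and drop prefetches that
    // would invalidate shared lines in other private caches; stubbed here.
    bool allowPrefetch(uint64_t line_addr) { (void)line_addr; return true; }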

Description

Technical Field

[0001] The invention relates to the field of performance optimization of multi-core storage systems, and in particular to a thread-aware multi-core data prefetch self-tuning method.

Background

[0002] Storage access latency has become one of the key bottlenecks to processor performance improvement. To reduce the performance loss caused by storage access delay, researchers proposed hardware data prefetching. Prefetching fetches instructions or data from off-chip memory into a cache or prefetch buffer before the processor accesses them. Analysis of a large number of applications shows that instruction and data access patterns are often highly regular, which makes it possible to predict access addresses in advance and fetch the corresponding data early. Data prefetching has been proven to improve performance effectively on traditional single-core processors. ...
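As a concrete instance of the hardware prefetching idea in paragraph [0002], the sketch below implements a classic per-PC stride prefetcher in C++. This is the textbook baseline technique, not the mechanism claimed by this patent; the table organization, the saturating-confidence logic, and the thresholds are illustrative assumptions.

    #include <cstdint>
    #include <unordered_map>

    // Classic stride prefetcher: if a load at the same PC repeatedly accesses
    // addresses a fixed stride apart, predict and prefetch the next address.
    class StridePrefetcher {
        struct Entry {
            uint64_t last_addr  = 0;
            int64_t  stride     = 0;
            int      confidence = 0;          // saturating counter
        };
        std::unordered_map<uint64_t, Entry> table_;  // keyed by load PC

    public:
        // Observe a demand access; return an address to prefetch, or 0 if none.
        uint64_t observe(uint64_t pc, uint64_t addr) {
            Entry& e = table_[pc];
            int64_t stride = int64_t(addr) - int64_t(e.last_addr);
            if (e.last_addr != 0 && stride != 0 && stride == e.stride) {
                if (e.confidence < 3) ++e.confidence;   // saturate at 3
            } else {
                e.stride     = stride;
                e.confidence = 0;
            }
            e.last_addr = addr;
            // Prefetch only after the same stride has been seen twice in a row.
            return (e.confidence >= 2) ? addr + uint64_t(e.stride) : 0;
        }
    };

For example, a demand stream a, a+64, a+128 at one PC trains the entry, and from the third access onward observe() returns a+192, a+256, and so on as prefetch candidates.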

Claims

Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F12/0806; G06F12/0862
Inventors: 刘鹏 (Liu Peng), 辛愿 (Xin Yuan), 刘勇 (Liu Yong), 于绩洋 (Yu Jiyang), 黄巍 (Huang Wei)
Owner: ZHEJIANG UNIV