
A high-performance heterogeneous multi-core shared cache buffer management method

A heterogeneous multi-core buffer management technology, applied in the field of heterogeneous multi-core shared cache buffer management, which addresses problems such as unfair allocation of the shared last-level cache, the resulting impact on system performance and power consumption, and the inability of existing methods to adapt to heterogeneous environments. Its effects include fair competition for the shared cache, improved cache utilization, and an improved memory hit rate.

Active Publication Date: 2020-05-08
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, existing cache management work mainly targets homogeneous multi-core systems. It cannot adapt to heterogeneous environments in which a CPU and a GPU are combined, let alone distinguish requests issued by the CPU from requests issued by the GPU. The result is unfair allocation of the shared last-level cache, which seriously affects system performance and power consumption.



Examples


Embodiment Construction

[0028] In order to make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

[0029] The invention relates to a high-performance heterogeneous multi-core shared cache buffer management method. As shown in Figure 1, take as an example a heterogeneous processor with two CPU cores and four GPU cores, where each core has its own L1 Cache and all cores share one L2 Cache. The CPU test programs are single-threaded SPEC CPU2006 benchmarks and the GPU applications are taken from Rodinia; each workload consists of one CPU test program and one GPU application. In the simulator, the SLICC (Specification Language for Implementing Cache Coherence) scripting language is used to describe the coherence protocol; Figure 2 shows the SLICC operating mechanism. The specific steps are as follows:
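For concreteness, the following C++ sketch spells out the example configuration just described: two CPU cores and four GPU cores, each with its own private L1, all sharing a single L2 (the LLC). The struct and field names are illustrative assumptions, not taken from the patent or from any particular simulator.

```cpp
#include <string>
#include <vector>

// Illustrative description of the example heterogeneous topology:
// 2 CPU cores + 4 GPU cores, each with a private L1, sharing one L2 (LLC).
enum class CoreType { CPU, GPU };

struct Core {
    int         id;
    CoreType    type;
    std::string privateL1;   // identifier of this core's private L1 cache
};

struct Topology {
    std::vector<Core> cores;
    std::string       sharedL2;   // the single shared last-level cache (LLC)
};

Topology makeExampleTopology() {
    Topology t;
    t.sharedL2 = "L2.shared";
    for (int i = 0; i < 2; ++i)
        t.cores.push_back({i, CoreType::CPU, "L1.cpu" + std::to_string(i)});
    for (int i = 0; i < 4; ++i)
        t.cores.push_back({2 + i, CoreType::GPU, "L1.gpu" + std::to_string(i)});
    return t;
}
```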

[0030] Step 1. Distinguish the CPU memory access re...
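The rest of Step 1 is truncated above, but its purpose is to tell CPU memory accesses apart from GPU memory accesses before they reach the shared cache. A minimal sketch of that idea, assuming each request carries a tag identifying the type of core that issued it (the MemRequest type, its fields, and the routing rule are hypothetical illustrations, not the patent's actual mechanism):

```cpp
#include <cstdint>

// Hypothetical memory request as seen by the shared-cache controller.
enum class Requester { CPU, GPU };

struct MemRequest {
    std::uint64_t addr;    // physical address being accessed
    Requester     source;  // which kind of core issued the request
};

// Route the request: GPU traffic probes the GPU-side buffer first,
// while CPU traffic goes directly to the shared LLC.
enum class Route { GpuBuffer, SharedLLC };

inline Route classify(const MemRequest& req) {
    return (req.source == Requester::GPU) ? Route::GpuBuffer : Route::SharedLLC;
}
```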



Abstract

The invention discloses a high-performance heterogeneous multi-core shared cache buffer management method. First, a buffer with the same structure as the shared last-level L2 cache (LLC) is established on the GPU side, and GPU requests access this buffer first, filtering GPU traffic and freeing LLC space for CPU applications. On top of the added buffer, a replacement policy suited to the different characteristics of CPU applications and GPU applications is adopted to increase the cache hit rate. Finally, the buffer size is adjusted: it is changed before a run according to IPC-based metrics to find the best-performing configuration, thereby improving overall system performance.
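To make the flow in the abstract concrete, the sketch below models a GPU-side buffer organized like the shared LLC (set-associative), where GPU requests probe the buffer before the LLC and buffer misses are filled from below. The class name, the LRU replacement, and the fill-on-miss policy are assumptions for illustration only; the patent's actual replacement policy is tailored to CPU/GPU characteristics and is not reproduced here.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

// A set-associative GPU-side buffer with the same organization as the shared LLC.
// GPU requests look here first; only on a buffer miss do they fall through to the LLC.
class GpuSideBuffer {
public:
    GpuSideBuffer(std::size_t numSets, std::size_t ways)
        : numSets_(numSets), ways_(ways), sets_(numSets) {}

    // Returns true on a buffer hit; on a miss, the line is filled into the buffer
    // (illustrative policy: fill on every miss, evict the LRU line of the set).
    bool access(std::uint64_t lineAddr) {
        Set& set = sets_[lineAddr % numSets_];
        auto it = set.index.find(lineAddr);
        if (it != set.index.end()) {                 // hit: move line to MRU position
            set.lru.splice(set.lru.begin(), set.lru, it->second);
            return true;
        }
        if (set.lru.size() == ways_) {               // miss in a full set: evict LRU victim
            set.index.erase(set.lru.back());
            set.lru.pop_back();
        }
        set.lru.push_front(lineAddr);                // fill the missed line at MRU
        set.index[lineAddr] = set.lru.begin();
        return false;
    }

private:
    struct Set {
        std::list<std::uint64_t> lru;   // MRU at front, LRU at back
        std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> index;
    };
    std::size_t numSets_, ways_;
    std::vector<Set> sets_;
};
```

In this model, a GPU request that hits in the buffer never touches the LLC, which is how the buffer filters GPU traffic and leaves LLC capacity to CPU applications; changing the buffer dimensions before a run corresponds to the IPC-guided size adjustment mentioned in the abstract.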

Description

Technical field

[0001] The invention belongs to the field of computer cache architecture, and specifically relates to a high-performance heterogeneous multi-core shared cache buffer management method.

Background technique

[0002] With advances in semiconductor technology, and with single processors running into insurmountable obstacles such as physical limits and power consumption, architecture technology has undergone profound changes. After years of continuous research and development, architectures represented by multi-core processors have gradually replaced single-core processors as the main way to improve processor performance. A multi-core processor integrates multiple processor cores on one chip; these cores may have the same or different functions and structures, are combined on the same chip in an effective way, and applications are distributed among them so that parallel processing is performed on different microproce...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06F12/0842G06F9/48G06F9/50
CPCG06F9/4812G06F9/5044G06F12/0842
Inventor 方娟张希蓓陈欢欢刘士建
Owner BEIJING UNIV OF TECH