Multi-core shared last-level cache management method and device for hybrid memory

A last-level cache management technology, applied in memory systems, electrical digital data processing, instruments, etc., to achieve the effect of reducing interference.

Active Publication Date: 2017-06-30
SUZHOU LANGCHAO INTELLIGENT TECH CO LTD


Problems solved by technology

[0008] In view of the above technical problems, the object of the present invention is to provide a hybrid main memory-oriented multi-core shared last-level cache management method and device, which comprehensively considers the differences in physical characteristics between the different main memory media in the hybrid memory system.



Examples


Example Embodiment

[0071] Example 1

[0072] Referring to Figure 1, a multi-core shared last-level cache management method for hybrid main memory provided by the present invention is shown. The hybrid main memory includes DRAM and NVM. The last-level cache is divided into multiple cache sets, and each cache set includes multiple cache lines; the data in the hybrid main memory and the last-level cache are in a multi-way set-associative mapping relationship. The management method includes the following steps:

[0073] S101: Obtain the last-level cache way-partitioning mode among the processor cores.

[0074] S102: Determine whether an access request received by the last-level cache hits a cache line of the last-level cache.

[0075] If it hits, proceed to step S103 to execute the cache line promotion policy (Promotion Policy);

[0076] If it misses, the data must be fetched from the upper-level cache or main memory, and the method proceeds directly to step S104 to execute the cache line insertion policy (Insertion Policy)...
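The dispatch in steps S102-S104 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class names, the placeholder promotion/insertion bodies, and the modulo set-index mapping are assumptions; the patent only fixes the order of operations (check for a hit, then run the promotion policy on a hit or the insertion policy on a miss).

```python
# Sketch of S102-S104: hit check, then promotion (hit) or insertion (miss).
# All names and policy bodies here are illustrative assumptions.

class CacheLine:
    def __init__(self, tag):
        self.tag = tag
        self.priority = 0            # adjusted by promotion/insertion policies

class CacheSet:
    """One set of a multi-way set-associative last-level cache."""
    def __init__(self, num_ways):
        self.ways = [None] * num_ways

    def lookup(self, tag):
        for line in self.ways:
            if line is not None and line.tag == tag:
                return line
        return None

class LLC:
    def __init__(self, num_sets, num_ways):
        self.sets = [CacheSet(num_ways) for _ in range(num_sets)]
        self.num_sets = num_sets

    def access(self, block_addr):
        # S102: determine whether the request hits a cache line
        cache_set = self.sets[block_addr % self.num_sets]
        tag = block_addr // self.num_sets
        line = cache_set.lookup(tag)
        if line is not None:
            self.promote(line)       # S103: promotion policy on a hit
            return "hit"
        # On a miss the data comes from the upper-level cache or main memory,
        # then the insertion policy places the new line (S104).
        self.insert(cache_set, tag)
        return "miss"

    def promote(self, line):
        line.priority += 1           # placeholder promotion policy

    def insert(self, cache_set, tag):
        # placeholder insertion policy: fill an empty way, else evict way 0
        for i, line in enumerate(cache_set.ways):
            if line is None:
                cache_set.ways[i] = CacheLine(tag)
                return
        cache_set.ways[0] = CacheLine(tag)

llc = LLC(num_sets=4, num_ways=2)
first = llc.access(0x2A)    # cold miss -> insertion policy runs
second = llc.access(0x2A)   # hit -> promotion policy runs
```

Separating `promote` and `insert` mirrors the patent's split between the two policies, so Example 2's priority scheme can later replace the placeholder bodies without touching the lookup path.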

Example Embodiment

[0091] Example 2

[0092] Referring to Figure 2, another multi-core shared last-level cache management method for hybrid main memory provided by the present invention is shown. The hybrid main memory includes DRAM and NVM. The last-level cache is divided into multiple cache sets, and each cache set includes multiple cache lines; the data in the hybrid main memory and the last-level cache are in a multi-way set-associative mapping relationship. The management method includes the following steps:

[0093] S201: Obtain the last-level cache way-partitioning mode among the processor cores.

[0094] S202: Divide the cache lines in the last-level cache (Last Level Cache, LLC) into four types: dirty NVM data (Dirty-NVM, denoted DN), dirty DRAM data (Dirty-DRAM, denoted DD), clean NVM data (Clean-NVM, denoted CN), and clean DRAM data (Clean-DRAM, denoted CD). The priorities of the four cache line types DN, DD, CN, and CD are denoted DNP, DDP, CNP, and CDP respectively, and set the priority ...
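The S202 classification can be sketched as a small function. The four categories (DN, DD, CN, CD) come from the patent text above; the concrete priority values and their ordering (DNP > DDP > CNP > CDP) are an assumption made for illustration, chosen because writing a dirty line back to NVM is the most expensive case, so such lines plausibly get the highest retention priority. The patent's actual priority assignment is truncated above.

```python
# Sketch of the S202 classification of LLC lines by dirty bit and backing
# medium. Priority values below are ASSUMED (the patent text is truncated).

def classify(line_dirty, backing_medium):
    """Return the category of a cache line, given its dirty bit and
    whether its backing main-memory medium is 'NVM' or 'DRAM'."""
    if line_dirty:
        return "DN" if backing_medium == "NVM" else "DD"
    return "CN" if backing_medium == "NVM" else "CD"

# Assumed priority values DNP, DDP, CNP, CDP (higher = kept longer in LLC),
# ordered so that lines whose eviction would cost an NVM write rank highest.
PRIORITY = {"DN": 3, "DD": 2, "CN": 1, "CD": 0}

category = classify(line_dirty=True, backing_medium="NVM")
# under this assumed ordering, a dirty NVM line outranks all other types
```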

Example Embodiment

[0124] As one implementable manner:

[0125] Referring to Figure 3, a schematic diagram of the overall system architecture provided by this implementation is shown. The system's main memory is composed of DRAM and NVM, which reside in the same linear address space. The on-chip cache system has a multi-level hierarchical architecture, and the larger-capacity LLC is shared by two processor cores (core1 and core2). In addition, the present invention sets an AFM for each processor core to identify the memory access characteristics of the application on the corresponding core and to obtain the hit status of the cache lines corresponding to that application.

[0126] Referring to Figure 4, a schematic diagram of the internal structure of the AFM provided by this implementation is shown. The time for the sum of the number of instructions executed by the processor's cores to grow from zero to 100 million is regarded as one counting cycle. At the beginning of each counting cycle...
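The counting-cycle mechanism can be sketched as below. Only the cycle definition (the summed instruction count of all cores advancing by 100 million) comes from the patent; the per-core hit/miss counters, the `AFM` structure, and resetting the monitors at each cycle boundary are assumptions for illustration, since the patent text is truncated at this point.

```python
# Sketch of the AFM counting cycle: one cycle = the interval over which the
# summed instruction count of all cores grows by 100 million (per the patent).
# The monitor structure and reset behavior below are ASSUMED.

COUNT_CYCLE = 100_000_000   # 100 million instructions across all cores

class AFM:
    """Per-core access feature monitor (internal structure assumed)."""
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def reset(self):
        self.hits = 0
        self.misses = 0

def maybe_start_new_cycle(core_instruction_counts, afms, last_boundary):
    """Reset all AFMs when the summed instruction count has advanced by
    COUNT_CYCLE since the last cycle boundary; return the new boundary."""
    total = sum(core_instruction_counts)
    if total - last_boundary >= COUNT_CYCLE:
        for afm in afms:
            afm.reset()
        return total        # new counting cycle begins here
    return last_boundary

afms = [AFM(), AFM()]       # one AFM per core (core1 and core2)
afms[0].record(hit=True)
boundary = maybe_start_new_cycle([60_000_000, 50_000_000], afms, 0)
# 110M >= 100M, so a new counting cycle begins and the AFMs are reset
```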



Abstract

The invention relates to the technical field of computer storage, in particular to a multi-core shared last-level cache management method and device for hybrid memory. The invention discloses a multi-core shared last-level cache management method for hybrid memory. The method comprises the following steps: obtaining the last-level cache way-partitioning mode of the processor, and judging whether an access request received by the last-level cache hits a cache line of the last-level cache. The invention also discloses a multi-core shared last-level cache management device for hybrid memory. The device comprises a last-level cache way-partitioning module and a judgment module. The method and device comprehensively consider the physical characteristics of the different main memory media in a hybrid memory system and optimize the traditional LRU replacement algorithm with the aim of reducing the number of evictions, thereby reducing storage energy overhead, reducing interference, improving the hit rate, and effectively improving the memory access performance of the last-level cache.

Description

technical field

[0001] The invention relates to the technical field of computer storage, in particular to a hybrid main memory-oriented multi-core shared last-level cache management method and device.

Background technique

[0002] As the scale of the data sets processed by applications (such as search engines and machine learning) continues to expand and the number of on-chip processor cores continues to increase, SRAM/DRAM-based storage systems have gradually become the bottleneck of system energy consumption and scalability. Recent non-volatile memories (Non-Volatile Memory, NVM), such as magnetoresistive random access memory (Magnetic Random Access Memory, MRAM), spin-transfer torque magnetoresistive memory (Spin-Transfer-Torque Magnetic Random Access Memory, STT-MRAM), resistive random access memory (Resistive Random Access Memory, ReRAM), and phase-change random access memory (Phase-Change Random Access Memory, PCM), are considered to be very competitive memories for next-generation storage...

Claims


Application Information

IPC(8): G06F12/0811; G06F12/126; G06F12/128; G06F12/0842; G06F12/0897
CPC: G06F12/0811; G06F12/0842; G06F12/0897; G06F12/126; G06F12/128; Y02D10/00
Inventor 张德闪
Owner SUZHOU LANGCHAO INTELLIGENT TECH CO LTD