Multi-core shared final stage cache management method and device for mixed memory
A last-level cache management technology, applied in memory systems, electrical digital data processing, instruments, etc., achieving the effect of reducing inter-core interference
Example Embodiment
[0071] Example 1
[0072] Referring to Figure 1, a multi-core shared last-level cache management method for hybrid main memory provided by the present invention is shown. The hybrid main memory includes DRAM and NVM. The last-level cache is divided into multiple cache sets, and each cache set includes multiple cache lines. The data in the hybrid main memory and the last-level cache have a multi-way set-associative mapping relationship. The management method includes the following steps:
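The multi-way set-associative mapping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the line size, set count, and way count are assumed values chosen for the example.

```python
# Illustrative set-associative address mapping: a physical address maps to
# exactly one cache set, and the block may occupy any way within that set.
# All sizing parameters below are assumptions for this sketch.

LINE_SIZE = 64    # bytes per cache line (assumed)
NUM_SETS = 2048   # number of cache sets in the LLC (assumed)
NUM_WAYS = 16     # ways (cache lines) per set (assumed)

def set_index(addr: int) -> int:
    """Return the index of the cache set that an address maps to."""
    return (addr // LINE_SIZE) % NUM_SETS
```

Two addresses whose line addresses differ by a multiple of `NUM_SETS` map to the same set and compete for its `NUM_WAYS` lines.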
[0073] S101: Obtain the multi-core last-level cache way-partitioning mode of the processor.
[0074] S102: Determine whether the access request received by the last-level cache hits a cache line of the last-level cache.
[0075] If it hits, proceed to step S103 to execute a cache line promotion policy (Promotion Policy);
[0076] If it misses, the data needs to be fetched from the upper-level cache or main memory, and the method proceeds directly to step S104 to execute the cache line insertion policy (Insertion Policy...
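The S102–S104 dispatch above can be sketched as a lookup that runs the promotion policy on a hit and the insertion policy on a miss. The policy bodies here are placeholders, since the patent's actual promotion and insertion policies are not spelled out in this excerpt.

```python
# Sketch of the hit/miss dispatch in steps S102-S104. The promote() and
# insert() bodies are placeholders, not the patent's actual policies.

def promote(cache_set: dict, tag: int) -> None:
    # Placeholder promotion policy: e.g. raise the line's priority on a hit.
    pass

def insert(cache_set: dict, tag: int, data) -> None:
    # Placeholder insertion policy: e.g. insert at a type-dependent priority.
    cache_set[tag] = data

def llc_access(cache_set: dict, tag: int, fetch):
    if tag in cache_set:          # S102: does the request hit the LLC?
        promote(cache_set, tag)   # S103: execute the promotion policy
        return cache_set[tag]
    data = fetch(tag)             # miss: fetch from upper-level cache / main memory
    insert(cache_set, tag, data)  # S104: execute the insertion policy
    return data
```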
Example Embodiment
[0091] Example 2
[0092] Referring to Figure 2, another multi-core shared last-level cache management method for hybrid main memory provided by the present invention is shown. The hybrid main memory includes DRAM and NVM. The last-level cache is divided into multiple cache sets, and each cache set includes multiple cache lines. The data in the hybrid main memory and the last-level cache have a multi-way set-associative mapping relationship. The management method includes the following steps:
[0093] S201: Obtain the multi-core last-level cache way-count partitioning mode of the processor.
[0094] S202: Divide the cache lines in the last-level cache (Last Level Cache, LLC) into four types: dirty NVM data (Dirty-NVM, denoted as DN), dirty DRAM data (Dirty-DRAM, denoted as DD), clean NVM data (Clean-NVM, denoted as CN), and clean DRAM data (Clean-DRAM, denoted as CD). The priorities of the four cache line types DN, DD, CN, and CD are denoted DNP, DDP, CNP, and CDP respectively, and set the priority ...
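The four-way classification in S202 follows directly from two bits of per-line state: whether the line is dirty, and whether its home location is NVM or DRAM. A minimal sketch (the priority values DNP/DDP/CNP/CDP are truncated in this excerpt, so only the classification itself is shown):

```python
# Classify an LLC cache line into the four types of step S202 based on its
# dirty bit and whether the block's home is in NVM or DRAM.

def classify(dirty: bool, in_nvm: bool) -> str:
    if dirty and in_nvm:
        return "DN"   # dirty NVM data
    if dirty:
        return "DD"   # dirty DRAM data
    if in_nvm:
        return "CN"   # clean NVM data
    return "CD"       # clean DRAM data
```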
Example Embodiment
[0124] As one possible implementation,
[0125] Referring to Figure 3, a schematic diagram of the overall system architecture provided by this implementation is shown. The main memory of the system is composed of DRAM and NVM, which reside in the same linear address space. The on-chip cache system has a multi-level hierarchical architecture, and the larger-capacity LLC is shared by two processor cores (core1 and core2). In addition, the present invention sets an AFM for each processor core to identify the memory-access characteristics of the application running on the corresponding core and obtain the hit status of the cache lines belonging to that application.
[0126] Referring to Figure 4, a schematic diagram of the internal structure of the AFM provided by this implementation is shown. The time during which the sum of the number of instructions executed by the multiple processor cores grows from zero to 100 million is regarded as one counting cycle. At the beginning of each counting cy...
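The counting-cycle trigger described above can be sketched as a per-core instruction counter whose sum closes a cycle at 100 million retired instructions. The class and method names are hypothetical; only the 100-million threshold and the multi-core sum come from the text.

```python
# Sketch of the AFM counting cycle: one cycle ends when the summed retired
# instruction count across all cores reaches 100 million, after which the
# counters restart from zero. Names here are illustrative, not the patent's.

CYCLE_LEN = 100_000_000  # instructions per counting cycle (from the text)

class AFMCycle:
    def __init__(self, num_cores: int):
        self.insts = [0] * num_cores   # retired instructions per core
        self.cycles_completed = 0

    def retire(self, core: int, n: int) -> None:
        """Record n retired instructions on the given core."""
        self.insts[core] += n
        if sum(self.insts) >= CYCLE_LEN:
            self.cycles_completed += 1
            self.insts = [0] * len(self.insts)  # begin a new counting cycle
```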