
Cache management method and device


Active Publication Date: 2014-11-19
HUAWEI TECH CO LTD

AI Technical Summary

Problems solved by technology

In this way, accessing memory across NUMA nodes must pass through QPI, which is slow; that is, a processor in a NUMA system accesses its local memory much faster than it accesses non-local memory.
[0004] Therefore, when an existing NUMA system accesses non-local memory across NUMA nodes, the data access speed drops and system performance degrades.

Method used



Examples


Embodiment 1

[0072] An embodiment of the present invention provides a cache management method applied to a device with a NUMA architecture. The device includes at least two central processing units (CPUs) and a data buffer, and the data buffer is divided into at least two local memory areas, with each CPU corresponding to one local memory area. As shown in Figure 2, for each CPU, the method comprises the following steps:

[0073] 101. Record the frequency at which each memory page in the local memory area corresponding to the local CPU is accessed by each CPU.

[0074] For each CPU in the server (the cache management device), when a given CPU acts as the execution subject of the method provided in this embodiment, that CPU is called the local CPU. Each CPU records the frequency at which the memory pages in its local memory area are accessed by each CPU, which allows each CPU to detect the access frequency of each memory page,...
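Step 101 can be pictured as a per-page, per-CPU access counter kept by each local CPU. The sketch below is illustrative only, assuming numeric page and CPU identifiers; the class and method names are not taken from the patent.

```python
from collections import defaultdict


class LocalMemoryArea:
    """Sketch of step 101: the local CPU tracks, for every memory page in
    its local memory area, how often each CPU in the system accesses it."""

    def __init__(self):
        # access_freq[page_id][cpu_id] -> number of recorded accesses
        self.access_freq = defaultdict(lambda: defaultdict(int))

    def record_access(self, page_id, cpu_id):
        """Record one access to page_id by cpu_id."""
        self.access_freq[page_id][cpu_id] += 1

    def top_accessor(self, page_id):
        """Return the CPU that accessed page_id most often
        (the natural candidate for a target CPU), or None."""
        counts = self.access_freq[page_id]
        return max(counts, key=counts.get) if counts else None


area = LocalMemoryArea()
for _ in range(3):
    area.record_access(7, cpu_id=2)   # CPU 2 touches page 7 three times
area.record_access(7, cpu_id=0)       # the local CPU touches it once
print(area.top_accessor(7))           # CPU 2 is the heaviest accessor
```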

Embodiment 2

[0118] An embodiment of the present invention provides a cache management method applied to a non-uniform memory access (NUMA) device. The NUMA device includes at least two central processing units (CPUs) and a data buffer; the data buffer contains at least one local memory area, and each CPU corresponds to one local memory area. As shown in Figure 3, for each CPU acting as the execution subject (the local CPU), the method includes the following steps:

[0119] 201. Record the frequency at which each memory page in the local memory area corresponding to the local CPU is accessed by each CPU.

[0120] It should be noted that when the cache management method provided by the present invention runs on a CPU, that CPU is called the local CPU; that is, the CPU acting as the execution subject is the local CPU.

[0121] Referring to Figure 1 above, it can ...

Embodiment 3

[0147] An embodiment of the present invention provides a cache management method. As shown in Figure 4, for each CPU in the server, when a given CPU acts as the execution subject, that CPU is called the local CPU, and the method includes the following steps:

[0148] 301. The local CPU uses a local index and a global index to record the correspondence between the storage address of each memory page in the local memory area and the number of each memory page.

[0149] Specifically, the local CPU uses the local index to record the correspondence between the storage address of each memory page within the local memory area and the number of each memory page, and uses the global index to record the correspondence between the storage addresses of memory pages located in buffer areas other than the local CPU's local memory area and the numbers of those memory pages.

[0150] In implementation, th...
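The two indexes in step 301 amount to two page-number-to-address maps: one for pages inside the local memory area, one for pages held elsewhere. A minimal sketch, assuming integer page numbers and addresses (the class and method names are illustrative, not taken from the patent claims):

```python
class PageIndex:
    """Sketch of step 301: a local index maps page numbers to storage
    addresses inside the local memory area; a global index maps page
    numbers to addresses in buffer areas belonging to other CPUs."""

    def __init__(self):
        self.local_index = {}   # page_no -> address in the local memory area
        self.global_index = {}  # page_no -> address in a remote memory area

    def register_local(self, page_no, address):
        self.local_index[page_no] = address

    def register_remote(self, page_no, address):
        self.global_index[page_no] = address

    def lookup(self, page_no):
        """Prefer the local copy; fall back to the global index."""
        if page_no in self.local_index:
            return ("local", self.local_index[page_no])
        if page_no in self.global_index:
            return ("remote", self.global_index[page_no])
        return None


idx = PageIndex()
idx.register_remote(5, 0x2000)      # page 5 currently lives remotely
print(idx.lookup(5))                # resolves through the global index
idx.register_local(5, 0x1000)       # after migration it has a local address
print(idx.lookup(5))                # now resolves through the local index
```

The point of keeping both maps is that a lookup can distinguish a fast local access from a slow cross-node one before touching memory.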



Abstract

The embodiments of the invention provide a cache management method and device, relating to the technical field of computer applications. Memory pages are migrated sensibly so that data are concentrated in the local memory areas of the CPUs that use them, avoiding the drop in data access speed caused by frequent remote memory accesses and thereby improving access speed. The method comprises the following steps: the frequency at which each CPU accesses each memory page in the local memory area corresponding to a local CPU is recorded; whether any of these memory pages is a page to be migrated is detected, and if so, the CPU with the highest access frequency for that page is determined as the target CPU; and the page to be migrated is transferred to the local memory area of the target CPU.
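The abstract's three steps (record frequencies, detect a page worth migrating, move it to its heaviest user) can be sketched as a single scan over the local CPU's pages. The threshold below is an assumption for illustration; the patent only says that a page "to be migrated" is detected, without fixing the criterion.

```python
def migrate_hot_pages(access_freq, page_location, local_cpu, threshold=4):
    """Sketch of the abstract's migration loop.

    access_freq:   {page_id: {cpu_id: access count}} recorded by the local CPU
    page_location: {page_id: cpu_id} -- which CPU's local area holds the page
    threshold:     assumed minimum remote access count to trigger migration
    Returns the list of (page_id, target_cpu) migrations performed.
    """
    migrations = []
    for page_id, counts in access_freq.items():
        if page_location.get(page_id) != local_cpu:
            continue  # only pages in this CPU's local memory area
        # The CPU with the maximum access frequency is the target CPU.
        target = max(counts, key=counts.get)
        if target != local_cpu and counts[target] >= threshold:
            page_location[page_id] = target  # move page into target's area
            migrations.append((page_id, target))
    return migrations


# Page 10 is held by CPU 0 but hammered by CPU 1, so it migrates;
# page 11 is mostly used locally, so it stays put.
freq = {10: {0: 1, 1: 5}, 11: {0: 6}}
where = {10: 0, 11: 0}
print(migrate_hot_pages(freq, where, local_cpu=0))
```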

Description

Technical Field [0001] The present invention relates to the field of computer application technology, and in particular to a cache management method and a cache management device. Background [0002] With the rapid development of computer application technology, servers are configured with 2, 4, or even 8 CPUs, which gave rise to the NUMA (Non-Uniform Memory Access) architecture. Under the NUMA architecture, the processors set aside a large memory address space as a data buffer, divide this data buffer into multiple memory areas, and assign each CPU one memory area. The NUMA architecture reduces link delay between CPUs and offers good scalability and ease of programming. [0003] In the NUMA architecture, each CPU and its corresponding memory area form a NUMA node, and high-speed communication links between every two NUMA nodes are implemented through QPI (Quick Path Intercon...

Claims


Application Information

IPC(8): G06F12/08, G06F13/16, G06F3/06, G06F12/1045
Inventor: 周烜, 朱阅岸, 程广卫
Owner HUAWEI TECH CO LTD