
Memory allocation method and delay-aware memory allocation apparatus for balancing memory access latency among multiple nodes in a NUMA architecture

A memory allocation and memory access technology, applied to multi-program devices, resource allocation, and program control design, which can solve problems such as unfair sharing of memory resources, widening performance differences among application processes, and fluctuations in overall application performance.

Inactive Publication Date: 2016-03-09
凯习(北京)信息科技有限公司

Problems solved by technology

The NUMA multi-core architecture alleviates contention among multiple cores for a single integrated memory controller (IMC), but unbalanced memory access latency among multiple memory nodes makes the sharing of memory resources unfair across running application processes. This in turn widens the performance differences between processes and causes overall application performance to fluctuate.
The memory allocation method of the Linux operating system considers only the memory capacity available on each memory node when allocating memory, which leads to imbalanced memory access latency among memory nodes.
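To illustrate the problem described above, the following sketch contrasts a capacity-only node choice (the behaviour attributed here to the Linux allocator) with a latency-aware choice. This is not the patented implementation; the node data, the `tolerance_ns` parameter, and both selection policies are illustrative assumptions.

```python
# Hypothetical sketch: capacity-only vs. latency-aware NUMA node selection.

def pick_node_by_capacity(nodes):
    """Mimics capacity-only allocation: choose the node with the most free memory."""
    return max(nodes, key=lambda n: n["free_mb"])

def pick_node_latency_aware(nodes, tolerance_ns=20):
    """Keep only nodes whose measured latency is within tolerance_ns of the
    minimum, then choose by capacity among those candidates."""
    best = min(n["latency_ns"] for n in nodes)
    candidates = [n for n in nodes if n["latency_ns"] - best <= tolerance_ns]
    return max(candidates, key=lambda n: n["free_mb"])

nodes = [
    {"id": 0, "free_mb": 2048, "latency_ns": 80},   # fast but less free memory
    {"id": 1, "free_mb": 4096, "latency_ns": 140},  # roomy but slow (remote)
]

print(pick_node_by_capacity(nodes)["id"])    # capacity-only picks node 1
print(pick_node_latency_aware(nodes)["id"])  # latency-aware picks node 0
```

Under these sample numbers, the capacity-only policy steers new allocations to the slower node, which is exactly the latency imbalance the patent targets.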

Method used



Examples


Embodiment 1

[0110] This embodiment adopts a NUMA architecture with two memory nodes and uses the memory allocation method for balancing memory access latency among multiple nodes, together with the delay-aware memory allocation apparatus of the present invention, to perform a delay-aware balanced memory allocation test.

[0111] Experimental conditions: an IBM blade server with two Intel E5620 processors and two memory nodes, running the RedHat CentOS 6.5 operating system with Linux kernel 2.6.32. After starting the server, hyperthreading and prefetching are configured as disabled.

[0112] Test process (1): in a scenario of running multiple parallel instances of a single application, the non-delay-aware memory allocation process is tested and compared against the memory allocation process of the present invention in the latency-balanced state. The number of processes running in parallel ranges from 1 to 8, and the comparison of perfor...



Abstract

The present invention discloses a memory allocation method and a delay-aware memory allocation apparatus suitable for balancing memory access latency among multiple nodes in a NUMA architecture. The apparatus comprises a delay perception unit (1) embedded inside the GQ unit of the NUMA multi-core architecture and a memory allocation unit (2) embedded inside the Linux operating system. According to the disclosed method, the inter-node memory access latency is perceived periodically by the delay perception unit (1); the memory allocation unit (2) determines whether the inter-node latency is balanced, selects a memory allocation node according to the balance state, and finally outputs the selection to the Buddy memory allocator of the Linux operating system, thereby realizing physical memory allocation. For a NUMA multi-core architecture server, the apparatus stabilizes application performance and reduces the unfairness of memory sharing among application processes while ensuring memory access latency balance.
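The control flow the abstract describes (periodic perception, a balance check, then node selection) can be sketched as follows. The relative threshold, the averaging scheme, and the fallback-to-capacity behaviour are assumptions for illustration, not details taken from the patent.

```python
# Sketch of the perceive -> check balance -> select node loop.
from statistics import mean

def is_balanced(latencies_ns, rel_threshold=0.1):
    """Balanced if every node's latency is within rel_threshold of the mean."""
    avg = mean(latencies_ns)
    return all(abs(l - avg) / avg <= rel_threshold for l in latencies_ns)

def select_node(latencies_ns, free_mb):
    """If latencies are balanced, fall back to capacity-based choice;
    otherwise steer new allocations to the lowest-latency node."""
    if is_balanced(latencies_ns):
        return max(range(len(free_mb)), key=lambda i: free_mb[i])
    return min(range(len(latencies_ns)), key=lambda i: latencies_ns[i])

# Imbalanced sample: node 1's latency is far above node 0's.
print(select_node([90, 150], [1024, 8192]))   # -> 0 (restore balance)
# Balanced sample: capacity decides.
print(select_node([100, 105], [1024, 8192]))  # -> 1 (most free memory)
```

In the real apparatus the selected node index would be handed to the Buddy allocator, which performs the actual physical allocation.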

Description

Technical Field

[0001] The present invention relates to a memory allocation method for NUMA architecture servers, and more particularly to a memory allocation method suitable for balancing memory access latency on NUMA architecture servers running the Linux operating system.

Background

[0002] With the popularization and development of multi-core architectures, the NUMA (Non-Uniform Memory Access) multi-core architecture is widely adopted by major data centers and scientific computing clusters owing to its low-latency local memory access. However, the complexity of the NUMA structure makes memory management more complicated for the operating system. Although the Linux operating system can fully exploit the low-latency local memory access of the NUMA multi-core architecture, the problem of balancing memory access latency among multiple memory nodes remains unresolved. How to effectively manage the use of me...
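The local/remote latency asymmetry underlying the background discussion is conventionally expressed as a node distance matrix (the SLIT reported by `numactl --hardware` on Linux). The matrix below uses the common convention of 10 for local access; the specific values are illustrative, not measurements from the patent.

```python
# SLIT-style node distance matrix for a hypothetical two-node system.
distance = [
    [10, 21],  # from node 0 to nodes 0, 1
    [21, 10],  # from node 1 to nodes 0, 1
]

def remote_penalty(src, dst):
    """Relative cost of accessing dst's memory from a core on src,
    normalized so that local access costs 1.0."""
    return distance[src][dst] / distance[src][src]

print(remote_penalty(0, 0))  # local access: 1.0
print(remote_penalty(0, 1))  # remote access: 2.1x the local cost
```

It is this asymmetry that rewards local allocation but, when nodes fill unevenly, produces the inter-node latency imbalance the invention addresses.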

Claims


Application Information

Patent Timeline
Patent Type & Authority: Application (China)
IPC (8): G06F9/50
CPC: G06F9/5016
Inventors: 杨海龙, 李慧娟, 王辉, 刘岚, 栾钟治, 钱德沛 (Yang Hailong, Li Huijuan, Wang Hui, Liu Lan, Luan Zhongzhi, Qian Depei)
Owner 凯习(北京)信息科技有限公司