Distributed memory management method based on network and page granularity management

A memory-management and page-granularity technology, applied to memory address allocation/relocation, data-processing input/output, and memory systems. It addresses the problems that existing platforms cannot sustain performance and cannot handle data-intensive applications, achieving excellent performance, low latency, and ease of use.

Active Publication Date: 2020-06-12
EAST CHINA NORMAL UNIV

AI Technical Summary

Problems solved by technology

Caching only the hottest data items can adapt to skewed (unbalanced) scenarios in key-value stores (KVSs), but cannot handle other data-intensive applications.
Caching data blocks that are frequently accessed in data-intensive workloads...

Method used



Examples


Embodiment 1

[0026] The present invention is accessed through the PDMM interface listed in Table 1 below:

[0027] Table 1: PDMM interface

[0028]

[0029] Once the external interface provided by PDMM is connected, the Malloc and Free functions allow an application to create or release a piece of memory from the GPM. Distributed memory management then proceeds according to the following steps:
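Since Table 1 is not reproduced in this text, the following is only a minimal sketch of what a PDMM-style Malloc/Free interface over a global page-based memory (GPM) might look like; the type `gaddr_t`, the function names, and the bump-pointer allocator are illustrative assumptions, not the patent's actual interface.

```c
#include <stddef.h>
#include <stdint.h>

/* A global address locates memory inside the GPM rather than in the
 * local address space. The 64-bit width is an assumption. */
typedef uint64_t gaddr_t;

#define GPM_SIZE (1 << 20)
static unsigned char gpm[GPM_SIZE]; /* stand-in for the global memory */
static size_t gpm_next = 1;         /* bump pointer; 0 means "null"   */

/* Allocate a region of the GPM and return its global address. */
gaddr_t pdmm_malloc(size_t size) {
    if (size == 0 || gpm_next + size > GPM_SIZE)
        return 0;                   /* out of space or empty request  */
    gaddr_t a = (gaddr_t)gpm_next;
    gpm_next += size;
    return a;
}

/* Release a region; a real implementation would reclaim the space. */
void pdmm_free(gaddr_t addr) { (void)addr; }
```

The application never sees node-local pointers, only global addresses, which is what lets the platform place or migrate the backing pages freely.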

[0030] (1) Allocation request

[0031] See Figure 1. When a node processes an allocation request (line 2 of the code block in Figure 1), it first tries to allocate a memory space of exactly the requested size in local memory according to the given size parameter. If the requested size exceeds the remaining memory space of the current node, the node forwards the allocation request to another node in the cluster based on the metadata.
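The local-first-then-forward policy of step (1) can be sketched as follows; the `node_t` structure, the linear scan over cluster metadata, and the cluster size are illustrative assumptions standing in for the real metadata lookup and network forwarding.

```c
#include <stddef.h>

#define NODES 3

/* Per-node metadata as the allocator might track it. */
typedef struct {
    int    id;
    size_t capacity;   /* total memory on this node  */
    size_t used;       /* memory already handed out  */
} node_t;

/* Returns the id of the node that satisfied the request, or -1. */
int pdmm_allocate(node_t cluster[], int local, size_t size) {
    /* First try to carve the space out of local memory. */
    if (cluster[local].used + size <= cluster[local].capacity) {
        cluster[local].used += size;
        return local;
    }
    /* Otherwise forward the request to another node, guided by the
     * cluster metadata (here a simple first-fit scan). */
    for (int i = 0; i < NODES; i++) {
        if (i == local) continue;
        if (cluster[i].used + size <= cluster[i].capacity) {
            cluster[i].used += size;
            return i;
        }
    }
    return -1;   /* no node can satisfy the request */
}
```

In the real system the forwarding step is a network message rather than a local function call, but the decision logic is the same.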

[0032] (2) Memory access

[0033] See Figure 2. Data accessed on the GPM should be extracted as a pag...
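Step (2) serves GPM accesses at page granularity: on a miss the whole containing page is fetched into a local cache, and later accesses to the same page hit locally. A minimal sketch of that lookup follows; the page size, the direct-mapped cache, and all names are assumptions for illustration.

```c
#include <stdbool.h>

#define PAGE_SIZE   4096
#define CACHE_SLOTS 8

typedef unsigned long gaddr_t;

static gaddr_t cached_page[CACHE_SLOTS]; /* page tags; 0 = empty slot */
static int misses;

/* Access one byte of the GPM: locate its page, fetch on a miss.
 * Returns true on a local-cache hit, false when the page had to be
 * pulled over the network. */
bool gpm_access(gaddr_t addr) {
    gaddr_t page = addr / PAGE_SIZE + 1;   /* +1 so tag 0 means empty */
    int slot = (int)(page % CACHE_SLOTS);
    if (cached_page[slot] != page) {
        cached_page[slot] = page;          /* miss: fetch whole page  */
        misses++;
        return false;
    }
    return true;                           /* hit in the local cache  */
}
```

Fetching at page rather than object granularity is what amortizes the RDMA round-trip cost over every subsequent access to the same page.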



Abstract

The invention discloses a distributed memory management method based on network and page-granularity management. A node/page/block memory management scheme is adopted for the global address; portions of memory from different nodes are packaged into a global page-based memory (GPM), and data transmission at page granularity is supported. Memory management is performed under an updated memory consistency protocol, so that data in locally cached pages stays consistent with the data on the GPM, and high-level applications deployed on the PDMM access the GPM transparently, achieving low-latency, high-throughput inter-node access. Compared with the prior art, the method achieves low-latency, high-throughput access between nodes, is simple and convenient to use, effectively solves the problem of cache invalidation caused by write operations in data-intensive workloads, and the performance of the PDMM is superior to that of other products of the same type.
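The node/page/block scheme in the abstract implies that a global address decomposes into three fields. The patent does not give the field widths, so the bit layout below is purely an illustrative assumption showing how such a decomposition could work.

```c
#include <stdint.h>

/* Assumed layout: [ node | page | block ] within a 64-bit address. */
#define BLOCK_BITS 6    /* 64 blocks per page (assumed)  */
#define PAGE_BITS  12   /* 4096 pages per node (assumed) */

static uint64_t gaddr_make(uint64_t node, uint64_t page, uint64_t block) {
    return (node << (PAGE_BITS + BLOCK_BITS)) | (page << BLOCK_BITS) | block;
}
static uint64_t gaddr_node(uint64_t g) {
    return g >> (PAGE_BITS + BLOCK_BITS);
}
static uint64_t gaddr_page(uint64_t g) {
    return (g >> BLOCK_BITS) & ((1ULL << PAGE_BITS) - 1);
}
static uint64_t gaddr_block(uint64_t g) {
    return g & ((1ULL << BLOCK_BITS) - 1);
}
```

Decoding the node field first is what lets any node route a request for a non-local page to its owner without consulting a central directory.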

Description

Technical field

[0001] The invention relates to the technical field of distributed memory management, in particular to a distributed memory management method based on network and page granularity.

Background technique

[0002] As the performance scaling of single computer servers slowed down, people began to build distributed memory management platforms resembling a NUMA architecture across servers, using the low-latency remote access primitives of RDMA networks. Under a pure NUMA architecture, these platforms (such as FaRM, Rack-Out and GAM) provide object-level granularity for memory management and RDMA operations. Compared with local memory access, the latencies of InfiniBand QDR and RoCE, as deployed on GAM and FaRM respectively, are almost 25 times and 100 times that of local memory access. Therefore, the inter-node access delay of the NUMA architecture in a distributed memory management platform significantly reduces the execution speed of applications, espec...

Claims


Application Information

IPC(8): G06F3/06; G06F12/02; G06F12/0882
CPC: G06F3/0611; G06F3/0613; G06F3/064; G06F3/0679; G06F12/0238; G06F12/0882; Y02D10/00
Inventor: 胡卉芪, 朱明清
Owner EAST CHINA NORMAL UNIV