Memory allocation method and device in user mode protocol stack

A memory allocation and protocol stack technology, applied to resource allocation, multi-program devices, and program control design, which addresses problems such as increased CPU consumption.

Pending Publication Date: 2021-07-16
ALIBABA GRP HLDG LTD

AI Technical Summary

Problems solved by technology

After the data is copied, the application layer can release the memory or reuse it, while the subsequent release of the copied memory is handled by the protocol stack. The memory space can be used flexibly, but CPU consumption is increased.



Examples


Embodiment 1

[0076] The memory allocation process of this embodiment is described with reference to Figure 3:

[0077] In the embodiment of the present invention, a large, contiguous space is used as the memory allocation pool (mempool), and the space is divided into multiple fixed-length, aligned segments; the segment sizes can be designed as 4K, 8K, 16K, 32K, 64K, 128K, etc., with the specific sizes depending on the application scenario. Figure 3 gives the working flow chart of the memory allocator in the example of the present invention. Each memory segment reserves a fixed number of bytes in its header to store the corresponding context information, including the memory size, the reference count, and the first address of the memory. The allocation principle of the memory allocator is as follows: since the fragments in the memory allocation pool are prepared in advance, when a zbuf of any size is allocated, the fragment whose size fits best is selected first; if all fragments of that length have already been allocated, use the following...
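To make the fragment layout concrete, here is a minimal C sketch of such a fixed-segment pool: one contiguous region is carved into per-class segments, each segment keeps its context (size, reference count, first address) in a fixed-length header, and allocation picks the best-fitting size class. The names (zbuf, mempool_init, zbuf_alloc) and the three size classes used here are illustrative assumptions, not taken from the patent text.

```c
/*
 * Minimal sketch of a fixed-segment memory allocation pool, following
 * paragraph [0077].  Names and the size-class table are assumptions.
 */
#include <stdint.h>
#include <stdlib.h>

typedef struct zbuf {
    size_t       size;    /* total length of this segment (4K/8K/16K)   */
    int          refcnt;  /* reference count, set to 1 on allocation    */
    void        *base;    /* first address of the segment               */
    struct zbuf *next;    /* free-list link within one size class       */
    /* application payload follows the header inside the same segment   */
} zbuf;

#define ZBUF_DATA(z) ((uint8_t *)(z) + sizeof(zbuf))

#define NCLASSES 3
static const size_t class_size[NCLASSES] = { 4096, 8192, 16384 };
static zbuf *free_list[NCLASSES];   /* one free list per segment size   */
static uint8_t *pool;               /* the large contiguous region      */

/* Carve one big, page-aligned region into fixed-length segments.       */
int mempool_init(size_t segs_per_class)
{
    size_t total = 0;
    for (int c = 0; c < NCLASSES; c++)
        total += segs_per_class * class_size[c];

    pool = aligned_alloc(4096, total);   /* total is a multiple of 4096 */
    if (!pool)
        return -1;

    uint8_t *p = pool;
    for (int c = 0; c < NCLASSES; c++) {
        for (size_t i = 0; i < segs_per_class; i++) {
            zbuf *z = (zbuf *)p;
            z->size   = class_size[c];
            z->refcnt = 0;
            z->base   = p;
            z->next   = free_list[c];
            free_list[c] = z;
            p += class_size[c];
        }
    }
    return 0;
}

/* Best fit: the smallest class that can hold 'len' bytes of payload.
 * If that class is exhausted, fall back to the next larger one.        */
zbuf *zbuf_alloc(size_t len)
{
    for (int c = 0; c < NCLASSES; c++) {
        if (class_size[c] - sizeof(zbuf) < len)
            continue;                        /* class too small          */
        if (free_list[c] == NULL)
            continue;                        /* class exhausted, go up   */
        zbuf *z = free_list[c];
        free_list[c] = z->next;
        z->refcnt = 1;
        return z;
    }
    return NULL;                             /* whole pool exhausted     */
}

/* Drop one reference; the segment goes back to its free list at zero.  */
void zbuf_put(zbuf *z)
{
    if (--z->refcnt == 0) {
        int c = (z->size == 4096) ? 0 : (z->size == 8192) ? 1 : 2;
        z->next = free_list[c];
        free_list[c] = z;
    }
}
```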

Embodiment 2

[0079] The memory allocation process of this embodiment is described with reference to Figure 3:

[0080] As shown in Figure 3, multiple 1K, 2K, and 4K memory segments are pre-allocated in the memory allocation pool (mempool). The application layer allocates 1K, 2K, and 4K zbufs from the memory allocation pool and initializes the reference count of each zbuf to 1. The application layer fills data into the memory pointed to by the data field of the zbuf and finally calls the write interface to pass the address of data to the protocol stack; note that the address of the zbuf itself is not passed. Unlike the write interface of a traditional protocol stack, which copies the data from the application layer into the protocol stack's memory space, the zbuf allocated in the example of the present invention hands the application-layer data space to the user-mode protocol stack in DMA mode, and the protocol stack uses additional memory to hold the address and length of the data space, i.e....
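The difference between the copy-based write and the zero-copy write described above can be sketched as follows. The types zbuf_t / pbuf_t and the two write functions are hypothetical names, and programming the NIC's DMA engine is omitted.

```c
/*
 * Sketch of the zero-copy write path of paragraph [0080].  Type and
 * function names are illustrative; DMA programming is omitted.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {             /* application-side zero-copy buffer       */
    int      refcnt;         /* initialized to 1 when allocated         */
    size_t   cap;            /* capacity of the data area               */
    uint8_t *data;           /* the memory the application fills        */
} zbuf_t;

typedef struct pbuf {        /* stack-side descriptor: address + length */
    const uint8_t *payload;  /* points INTO the zbuf data, never copied */
    size_t         len;
    struct pbuf   *next;     /* singly linked send queue                */
} pbuf_t;

/* Traditional write: the stack copies the application data into its own
 * memory, so the caller may free or reuse its buffer right away, at the
 * cost of one CPU copy per write.                                       */
size_t write_copy(uint8_t *stack_mem, const uint8_t *app_data, size_t len)
{
    memcpy(stack_mem, app_data, len);
    return len;
}

/* Zero-copy write: only the ADDRESS and LENGTH of the data area are
 * handed to the stack (not the zbuf itself); the payload stays in place
 * and is fetched by DMA when the packet is actually transmitted.        */
pbuf_t *write_zero_copy(zbuf_t *z, size_t len, pbuf_t **send_queue)
{
    pbuf_t *p = malloc(sizeof(*p));
    if (!p)
        return NULL;
    p->payload  = z->data;   /* the data address, not &z                 */
    p->len      = len;
    p->next     = *send_queue;
    *send_queue = p;
    z->refcnt++;             /* the stack now also holds a reference     */
    return p;
}
```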

Embodiment 3

[0082] Figure 4 shows the scenario in which the application layer requests to allocate one zbuf, retries twice in succession, and writes the zbuf to the protocol stack three times. After the first write of the zbuf to the protocol stack, the protocol stack has sent the first half of the data, and the remaining half sits in the pbuf queue. When the application layer retries, the first half is written first because of the window limitation of the protocol stack; the protocol stack uses a new pbuf to record the memory space, and the reference count of the zbuf is incremented by one. If the space is contiguous, the data length of the existing pbuf is directly updated to the sum of the two, without updating the reference count; on the second retry, a new pbuf is used to record the data, and the reference count is incremented by one. The protocol stack sends the data in the pbuf queue sequentially, and when it receives the response packet for each pbuf, the reference count is decremented by one in turn...
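A compact sketch of this reference-count bookkeeping follows. The helpers tx_enqueue() / tx_acked() and the contiguity check are assumptions; send-window handling and retransmission timers of a real stack are omitted.

```c
/*
 * Sketch of the reference-count bookkeeping of Embodiment 3.  Helper
 * names are assumptions; window handling and timers are omitted.
 */
#include <stdint.h>
#include <stdlib.h>

typedef struct { int refcnt; uint8_t *data; } zbuf_t;

typedef struct pbuf {
    zbuf_t        *owner;    /* the zbuf this pbuf points into          */
    const uint8_t *payload;  /* start of the queued byte range          */
    size_t         len;
    struct pbuf   *next;
} pbuf_t;

/* Queue (part of) a zbuf for transmission.  If the new range continues
 * exactly where the last pbuf of the same zbuf ends, grow that pbuf and
 * leave the reference count untouched; otherwise append a new pbuf and
 * take one more reference on the zbuf.                                  */
void tx_enqueue(pbuf_t **tail, zbuf_t *z, const uint8_t *p, size_t len)
{
    pbuf_t *t = *tail;
    if (t && t->owner == z && t->payload + t->len == p) {
        t->len += len;                  /* contiguous: extend, no new ref */
        return;
    }
    pbuf_t *n = malloc(sizeof(*n));
    if (!n)
        return;
    n->owner = z;  n->payload = p;  n->len = len;  n->next = NULL;
    if (t)
        t->next = n;
    *tail = n;
    z->refcnt++;                        /* new pbuf: one more reference   */
}

/* Called when the peer acknowledges one pbuf: drop that pbuf's reference
 * and return the zbuf to the memory pool once nothing references it.    */
void tx_acked(pbuf_t *p)
{
    zbuf_t *z = p->owner;
    free(p);
    if (--z->refcnt == 0) {
        /* zbuf_free(z);  return the segment to the allocation pool      */
    }
}
```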



Abstract

The invention discloses a memory allocation method and device in a user mode protocol stack. The method comprises the steps of: when a message is received, determining one or more memory fragments from a memory allocation pool to store the message according to the data size of the message, and storing the context information corresponding to the message in the memory fragments; allocating a zero-copy memory zbuf in the application layer, and filling data into the zbuf memory through the application layer; transmitting the memory address of the memory fragment storing the message to the protocol stack in DMA mode; and enabling the protocol stack to store the memory address and length information of the message in a pbuf. High-performance zero-copy memory allocation in the user mode protocol stack is thereby realized.

Description

Technical Field

[0001] The invention relates to the technical field of data transmission management, and in particular to a memory allocation method and device in a user mode protocol stack.

Background

[0002] In a distributed system, point-to-point machines need to transmit data through network communication. However, with the increasing bandwidth of network card devices and the growing performance of multi-core CPUs, the ability of the kernel-mode TCP/IP protocol stack to process data packets has become a bottleneck. In order to meet users' demands for low latency and high throughput in distributed systems, using a user mode protocol stack and bypassing the kernel is an important direction for high-performance systems. When application layer data is written to the user mode protocol stack, the CPU performs a data copy operation, and the performance loss caused by copying is intolerable in high-performance systems. After the data is copied, the application layer can release the memory or re...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/50, G06F13/28
CPC: G06F9/5016, G06F13/28
Inventors: 石博, 朱凌俊
Owner: ALIBABA GRP HLDG LTD