Method and device for scheduling memory pool in multi-core central processing unit system

A central processing unit and memory pool technology, applied in the field of computers, which solves problems such as reduced timeliness of data processing and degraded processing performance in multi-core CPU systems, with the effects of reducing allocation time, improving performance, and improving timeliness.

Publication Date: 2012-09-12 (Inactive)
Owner: RUIJIE NETWORKS CO LTD


Problems solved by technology

[0009] It can be seen that, in a multi-core CPU system based on the above-mentioned processing mechanism, the pipeline threads can only be processed serially, and the memory allocation applications of most pipeline threads will be left waiting, which reduces the timeliness of data processing by the multi-core CPU system and degrades its processing performance.



Examples


Embodiment 1

[0036] Embodiment 1 of the present invention provides the generation process of the first-level memory pools. This generation process is generally completed in the system initialization stage, and as the system runs, the number of first-level memory pools can be increased or decreased according to the actual situation.

[0037] Figure 3 shows a schematic flow chart of generating the primary memory pools. Specifically, generating the primary memory pools mainly includes the following steps 301 and 302:

[0038] Step 301, the memory pool scheduler determines the number of primary memory pools to be generated and the memory size of each primary memory pool.

[0039] Wherein, the number of primary memory pools to be generated may be determined according to the number of concurrent pipeline threads in the multi-core CPU system. Preferably, the number of primary memory pools to be generated may be equal to the number of concurrent pipeline threads.
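
As an illustrative sketch only (none of these type or function names appear in the patent), the following C code shows one way steps 301 and 302 could be realized: the scheduler sizes one primary pool per concurrent pipeline thread and fills each primary pool with buffer units detached from the secondary pool, consistent with the description that the primary pools' buffer units come from the secondary pool.

    /* Illustrative sketch of steps 301-302; all names are hypothetical. */
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct buf_unit {
        struct buf_unit *next;        /* free-list link */
        unsigned char    data[2048];  /* fixed-size buffer for one packet */
    } buf_unit_t;

    typedef struct {                  /* secondary pool: global free list */
        buf_unit_t *free_list;
        size_t      free_count;
    } secondary_pool_t;

    typedef struct {                  /* primary pool: one per pipeline thread */
        buf_unit_t *free_list;
        size_t      free_count;
    } primary_pool_t;

    /* Step 301: one primary pool per concurrent pipeline thread. */
    static size_t primary_pool_count(size_t pipeline_threads)
    {
        return pipeline_threads;
    }

    /* Step 302 (assumed): detach units_per_pool buffer units from the
     * secondary pool and attach them to each newly generated primary pool. */
    static primary_pool_t *generate_primary_pools(secondary_pool_t *sec,
                                                  size_t pool_count,
                                                  size_t units_per_pool)
    {
        primary_pool_t *pools = calloc(pool_count, sizeof(*pools));
        if (!pools)
            return NULL;

        for (size_t i = 0; i < pool_count; i++) {
            for (size_t u = 0; u < units_per_pool && sec->free_list; u++) {
                buf_unit_t *unit = sec->free_list;   /* detach from secondary */
                sec->free_list = unit->next;
                sec->free_count--;

                unit->next = pools[i].free_list;     /* attach to primary i */
                pools[i].free_list = unit;
                pools[i].free_count++;
            }
        }
        return pools;
    }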

Embodiment 2

[0077] Embodiment 2 of the present invention provides a method for scheduling memory pools in a multi-core CPU system. The method mainly utilizes at least two primary memory pools generated in Embodiment 1 to implement memory pool scheduling for concurrent pipeline threads.

[0078] Figure 7 shows a schematic diagram of the relationship between the primary memory pools, the secondary memory pool, and the pipeline threads when the memory pool scheduler function is integrated into the secondary memory pool. Figure 7 takes the case in which each pipeline thread corresponds to one first-level memory pool as an example. As can be seen from Figure 7, each pipeline thread sends a memory allocation application to the secondary memory pool through arrow 1; after the secondary memory pool determines the primary memory pool corresponding to that pipeline thread, it allocates that primary memory pool to the corresponding pipeline thread through arrows 2 and 3, and subsequ...
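
The allocation path of arrows 1 to 3 in Figure 7 can be sketched as follows. This is an assumption-laden illustration: the thread-to-pool mapping by index, the refill batch size of 32, and all identifiers are hypothetical, and the pool layout is the same as in the sketch under Embodiment 1. Because each primary pool is private to one pipeline thread, the common case needs no lock; the secondary pool is touched, under a lock, only when a primary pool runs empty.

    /* Illustrative sketch of the Embodiment 2 allocation path (Figure 7,
     * arrows 1-3); all identifiers are hypothetical. */
    #include <pthread.h>
    #include <stddef.h>

    typedef struct buf_unit { struct buf_unit *next; unsigned char data[2048]; } buf_unit_t;
    typedef struct { buf_unit_t *free_list; size_t free_count; } secondary_pool_t;
    typedef struct { buf_unit_t *free_list; size_t free_count; } primary_pool_t;

    static pthread_mutex_t sec_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Arrow 1: a pipeline thread applies for memory; the scheduler selects
     * the primary pool that was pre-allocated to that thread. */
    buf_unit_t *pool_alloc(primary_pool_t *pools, size_t thread_id,
                           secondary_pool_t *sec)
    {
        primary_pool_t *p = &pools[thread_id];    /* one pool per thread */

        if (!p->free_list) {
            /* Arrows 2-3 (assumed refill batch of 32): replenish the empty
             * primary pool with buffer units from the secondary pool. */
            pthread_mutex_lock(&sec_lock);
            for (int i = 0; i < 32 && sec->free_list; i++) {
                buf_unit_t *u = sec->free_list;
                sec->free_list = u->next;
                sec->free_count--;
                u->next = p->free_list;
                p->free_list = u;
                p->free_count++;
            }
            pthread_mutex_unlock(&sec_lock);
        }

        /* Fast path: the primary pool belongs only to this thread, so
         * concurrent threads never serialize here. */
        buf_unit_t *unit = p->free_list;
        if (unit) {
            p->free_list = unit->next;
            p->free_count--;
        }
        return unit;                              /* NULL if both pools are empty */
    }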

Embodiment 3

[0096] Corresponding to the methods for scheduling a memory pool in a multi-core CPU provided in Embodiment 1 and Embodiment 2 above, Embodiment 3 provides a device for scheduling a memory pool in a multi-core CPU. Figure 10 shows a schematic diagram of the structure of the device for scheduling a memory pool in a multi-core CPU; as shown in Figure 10, the device mainly includes:

[0097] A memory allocation application receiving unit 1001, a memory pool generating unit 1002, and a memory pool scheduling unit 1003;

[0098] Wherein:

[0099] A memory allocation application receiving unit 1001, configured to receive memory allocation applications sent by at least two pipeline threads respectively;

[0100] The memory pool generation unit 1002 is configured to generate at least two first-level memory pools and distribute each generated first-level memory pool to a pipeline thread, wherein the buffer units included in each first-level memory pool are obtained from the buffer units included in the second-level memory pool;
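
As a non-authoritative illustration of how the three units of Figure 10 might be expressed in C, the sketch below groups them into one structure of function pointers; the field names and signatures are invented for this example and are not taken from the patent.

    /* Illustrative sketch of the Figure 10 device; all names are hypothetical. */
    #include <stddef.h>

    typedef struct buf_unit { struct buf_unit *next; unsigned char data[2048]; } buf_unit_t;
    typedef struct { buf_unit_t *free_list; size_t free_count; } secondary_pool_t;
    typedef struct { buf_unit_t *free_list; size_t free_count; } primary_pool_t;

    typedef struct {
        /* Unit 1001: receives the memory allocation applications sent by
         * at least two pipeline threads. */
        void (*receive_application)(size_t thread_id, size_t size);

        /* Unit 1002: generates at least two primary pools from the buffer
         * units of the secondary pool and distributes one to each thread. */
        primary_pool_t *(*generate_pools)(secondary_pool_t *sec,
                                          size_t pool_count,
                                          size_t units_per_pool);

        /* Unit 1003: determines the primary pool corresponding to the
         * requesting thread and hands it a buffer unit from that pool. */
        buf_unit_t *(*schedule)(primary_pool_t *pools, size_t thread_id,
                                secondary_pool_t *sec);
    } mempool_scheduler_dev_t;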



Abstract

The invention discloses a method and a device for scheduling a memory pool in a multi-core central processing unit system. According to the technical scheme, after receiving memory allocation applications respectively sent from at least two pipeline threads, a memory pool scheduler carries out memory pool allocation for each received memory allocation application: it determines, from at least two pre-generated primary memory pools, the primary memory pool pre-allocated to the pipeline thread that sent the application, and allocates a buffer unit included in the determined primary memory pool to that pipeline thread for use, wherein the buffer units included in the primary memory pools are scheduled from the buffer units included in a secondary memory pool. According to the technical scheme, when multiple pipeline threads send applications simultaneously, corresponding memory pool resources can be allocated to each of them, so that the data processing timeliness of the multi-core CPU (central processing unit) system is improved.

Description

technical field

[0001] The invention relates to the field of computers, and in particular to a method and device for scheduling a memory pool in a multi-core CPU system.

Background technique

[0002] The central processing unit (Central Processing Unit, CPU) mainly processes packets. Specifically, each time the CPU receives a packet, it allocates a memory space from the system memory to store the packet, and after the packet has been sent, the memory space allocated to it must be released back to the system memory. The process of a packet from being received by the CPU to being sent by the CPU is called a pipeline.

[0003] As the number of packets processed by the CPU grows, because the time at which packets arrive and the amount of memory each packet requires are both uncertain, the operations of allocating and releasing memory for every single packet consume a large amount of CPU resources. A common solution to this proble...
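
For contrast with the pooled scheme, the following minimal C sketch shows the baseline behaviour described in paragraphs [0002] and [0003]: every packet causes one allocation from, and one release back to, system memory. The function name and buffer handling are hypothetical.

    /* Baseline pipeline sketched from [0002]-[0003]; names are hypothetical. */
    #include <stdlib.h>
    #include <string.h>

    void baseline_pipeline(const unsigned char *pkt, size_t len)
    {
        unsigned char *buf = malloc(len);   /* allocate system memory per packet */
        if (!buf)
            return;
        memcpy(buf, pkt, len);              /* store the received packet */

        /* ... process and send the packet ... */

        free(buf);                          /* release back to system memory */
    }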


Application Information

IPC(8): G06F9/50, G06F9/48, G06F9/38
Inventor: 李磊
Owner: RUIJIE NETWORKS CO LTD