
Method for distributing large continuous memory of kernel

A memory and kernel technology, applied in the field of memory management, which solves problems such as the long time needed to request and allocate large amounts of memory, and achieves the effect of shortening the allocation time.

Active Publication Date: 2011-05-11
曙光网络科技有限公司

AI Technical Summary

Problems solved by technology

With the usual method, all available memory blocks are requested first, the blocks are then sorted by address, contiguous blocks are merged, and finally a block that meets the requirement is selected; this takes a long time.
This approach has several disadvantages: 1. A system with a large amount of memory has many blocks to request, so the allocation requests alone take a long time; 2. The allocated memory blocks must then be sorted and merged by address, so the time complexity is roughly that of sorting the memory blocks.
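For contrast, the conventional path described above can be sketched roughly as follows. This is only an illustration with assumed names (struct blk, longest_run); it presumes the candidate blocks have already been gathered by an exhaustive pass over free memory, and it shows where the sort-dominated cost comes from:

#include <linux/sort.h>   /* sort() */
#include <linux/types.h>

/* One already-allocated block: start page frame number and length in pages. */
struct blk {
    unsigned long pfn;
    unsigned long pages;
};

static int cmp_blk(const void *a, const void *b)
{
    const struct blk *x = a, *y = b;

    if (x->pfn < y->pfn)
        return -1;
    return x->pfn > y->pfn;
}

/* Sort every obtained block by address, then scan once to find the longest
 * run of address-adjacent blocks; overall cost is dominated by the sort. */
static unsigned long longest_run(struct blk *blocks, int nr)
{
    unsigned long best = 0, run = 0;
    int i;

    sort(blocks, nr, sizeof(*blocks), cmp_blk, NULL);

    for (i = 0; i < nr; i++) {
        if (i && blocks[i - 1].pfn + blocks[i - 1].pages == blocks[i].pfn)
            run += blocks[i].pages;   /* adjacent: extend the current run */
        else
            run = blocks[i].pages;    /* gap: start a new run */
        if (run > best)
            best = run;
    }
    return best;
}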



Embodiment Construction

[0018] 1. Obtain the largest memory page order that the kernel can allocate;

[0019] When requesting memory, the maximum amount that can be requested in a single call is 2^order memory pages. This ensures both that the addresses within each memory block are contiguous and that every request is for the largest block the kernel can currently allocate.
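As a concrete point of reference, on Linux this limit comes from the buddy allocator. A minimal sketch, assuming a kernel where a single request is capped at order MAX_ORDER - 1 (the helper name is illustrative):

#include <linux/mmzone.h>   /* MAX_ORDER */

/* Largest order a single buddy-allocator request may use on such kernels,
 * i.e. 2^(MAX_ORDER - 1) contiguous pages at once. */
static inline int max_alloc_order(void)
{
    return MAX_ORDER - 1;
}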

[0020] 2. Calculate the number of required memory pages N;

[0021] The required number of memory pages N is the quotient of the required memory size divided by the memory page size, rounded up to the next integer.
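In C this is an ordinary ceiling division; a short sketch using the kernel's DIV_ROUND_UP macro (the helper name is illustrative):

#include <linux/kernel.h>   /* DIV_ROUND_UP */
#include <linux/mm.h>       /* PAGE_SIZE */

/* Number of whole pages needed to cover the requested byte count. */
static inline unsigned long pages_needed(size_t bytes)
{
    return DIV_ROUND_UP(bytes, PAGE_SIZE);
}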

[0022] 3. Check the order: if it is negative, go to step 6; otherwise, request memory.

[0023] The order may decrease as the free memory of the system decreases, and it becomes -1 when there are no free pages left to allocate. As long as the order is not negative, memory allocation continues.

[0024] 4. If the memory request fails, reduce the order and go to step 3; if the request succeeds, merge the combinable memory blocks into contiguous memory; if a newly merged contiguous memory meets the requirement, go to step 5, otherwise go to step 3.
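A minimal C sketch of steps 1 through 4 follows, assuming the Linux buddy allocator (alloc_pages, MAX_ORDER, GFP_KERNEL). merge_and_check() is a hypothetical helper standing in for the block-merging bookkeeping described here (one possible shape for it is sketched after the Abstract below); the tracking needed to release unused blocks in step 6 is omitted:

#include <linux/mm.h>       /* alloc_pages, MAX_ORDER, PAGE_SIZE */
#include <linux/gfp.h>      /* GFP_KERNEL */
#include <linux/kernel.h>   /* DIV_ROUND_UP */

/* Hypothetical: record the new block, coalesce it with address-adjacent
 * blocks already obtained, and return true once a contiguous run of at
 * least "needed" pages exists. */
bool merge_and_check(struct page *pg, int order, unsigned long needed);

static bool alloc_large_contiguous(size_t bytes)
{
    unsigned long needed = DIV_ROUND_UP(bytes, PAGE_SIZE);  /* step 2 */
    int order = MAX_ORDER - 1;                              /* step 1 */

    while (order >= 0) {                                    /* step 3 */
        struct page *pg = alloc_pages(GFP_KERNEL, order);

        if (!pg) {
            order--;            /* step 4: request failed, shrink and retry */
            continue;
        }
        /* step 4: merge immediately and test the merged run */
        if (merge_and_check(pg, order, needed))
            return true;        /* requirement met: step 6 releases extras */
    }
    return false;               /* order went negative: give up (step 6) */
}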



Abstract

The invention provides a method for allocating a large contiguous memory of a kernel. The method comprises the following steps: A. acquiring the maximum memory page order available during kernel allocation; B. calculating the required number of memory pages N; C. turning to step F if the order is a negative number, or applying for memory according to the order if it is not; D. reducing the order and turning to step C if the application fails, or merging the combinable memory blocks into contiguous memory if the application succeeds, then turning to step E if a newly merged contiguous memory meeting the requirement exists and to step C if it does not; E. turning to step F when the application has succeeded because memory meeting the requirement exists, or jumping to step C if it does not; F. releasing the unused memory. By the method, large contiguous memory blocks can be quickly allocated from the kernel, and compared with the conventional allocation method the allocation time is shortened. This is guaranteed in two ways: on one hand, the maximum allocatable amount of memory is requested from the kernel each time; on the other hand, contiguous regions are merged immediately after each allocation and the merged memory is immediately checked against the requirement, so no extra application operations are needed.
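The second guarantee, merging each newly obtained block immediately rather than sorting everything afterwards, could look roughly like the sketch below. The names (struct mem_run, merge_and_check) are assumptions rather than anything published with the patent, and corner cases such as a new block bridging two existing runs are not handled:

#include <linux/list.h>     /* list_head, list_for_each_entry, list_add */
#include <linux/slab.h>     /* kmalloc, GFP_KERNEL */
#include <linux/mm.h>       /* struct page, page_to_pfn */

/* Hypothetical record of one contiguous run of already-allocated pages. */
struct mem_run {
    unsigned long start_pfn;   /* first page frame of the run */
    unsigned long nr_pages;    /* length of the run in pages */
    struct list_head node;
};

static LIST_HEAD(runs);

/* Fold a freshly allocated block (2^order pages starting at pg) into an
 * adjacent run if one exists, otherwise start a new run; report whether
 * any run is now large enough. */
static bool merge_and_check(struct page *pg, int order, unsigned long needed)
{
    unsigned long start = page_to_pfn(pg);
    unsigned long len = 1UL << order;
    struct mem_run *r;

    list_for_each_entry(r, &runs, node) {
        if (r->start_pfn + r->nr_pages == start) {   /* extends the tail */
            r->nr_pages += len;
            return r->nr_pages >= needed;
        }
        if (start + len == r->start_pfn) {           /* extends the head */
            r->start_pfn = start;
            r->nr_pages += len;
            return r->nr_pages >= needed;
        }
    }

    r = kmalloc(sizeof(*r), GFP_KERNEL);
    if (!r)
        return false;
    r->start_pfn = start;
    r->nr_pages = len;
    list_add(&r->node, &runs);
    return r->nr_pages >= needed;
}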

Description

Technical Field

[0001] The invention relates to the field of memory management, and in particular to a fast method for allocating a large contiguous memory of a kernel.

Background Art

[0002] Because the memory resources of the kernel are limited, the effective use of memory is closely tied to the functional performance of the system. Different operating systems provide different memory management methods, and these lead to different amounts of memory being obtainable in a single request. When a large piece of contiguous memory must be obtained, the ways of allocating and organizing the memory also differ greatly. At present, most methods for allocating large contiguous memory first allocate a large amount of memory and then select the memory blocks that meet the requirement; see patent CN101676883. Therefore, when the system memory is relatively large, memory allocation takes longer, which affects the functional performance of the system.

Claims


Application Information

IPC(8): G06F12/02
Inventor 刘灿刘朝辉李锋伟纪奎张磊
Owner 曙光网络科技有限公司