
Dynamic memory allocation method and device based on GPU (Graphics Processing Unit) and memory linked list

A technology for dynamic memory allocation based on a memory linked list, applied in the fields of memory address allocation/relocation, resource allocation and multiprogramming devices. It addresses problems such as a large amount of computation, high computational complexity and the inability to allocate memory dynamically on the GPU, and achieves the effect of a small memory footprint.

Active Publication Date: 2021-08-24
ZWCAD SOFTWARE CO LTD
Cites: 8 · Cited by: 0

AI Technical Summary

Problems solved by the technology

[0003] (1) CUDA provides functions for dynamic memory allocation on the GPU, but CUDA is only applicable to Nvidia graphics cards;
[0004] (2) Although OpenCL can be used on Intel, Nvidia and AMD graphics cards, code running on the GPU cannot dynamically allocate memory, so the range of applications for OpenCL is very limited;
However, most current dynamic memory allocation methods suffer from high computational complexity and a large amount of computation; they cannot accommodate the various complex algorithms in a GPU system and also degrade the operation of the parallel computing architecture in the GPU system.



Examples


Embodiment 1

[0056] The present invention provides a preferred embodiment: a GPU-based dynamic memory allocation method that can be used for dynamic memory allocation in OpenGL, a parallel computing architecture in a GPU system, and can be combined with mainstream graphics cards to realize a general-purpose GPU parallel computing architecture. The dynamic memory allocation method provided by the invention has low computational complexity and a small amount of computation, and therefore better meets system requirements.

[0057] Preferably, the dynamic memory allocation method in this embodiment mainly comprises three memory operations: an initial memory allocation process (allocate), a memory release process (free), and a memory reallocation process (reallocate). The present invention is described in detail for each of these operations. As shown in Figure 4, the initial memory allocation process includes:

[0058] Step S11, generate ...
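The following is a minimal host-side sketch, in C, of the three operations named in paragraph [0057] (an initial allocation, a release and a reallocation) implemented over a linked free list kept inside a flat pool. All identifiers (block_t, pool_alloc, pool_free, pool_realloc) are assumptions for illustration; this is not the patented data structure or its OpenGL/GPU implementation, which would work on buffer offsets rather than host pointers.

/* Minimal sketch: allocate / free / reallocate over a linked free list
 * inside a flat pool. Illustration only; names are assumed, not from the
 * patent. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define POOL_SIZE (64 * 1024)
#define NIL       UINT32_MAX              /* "no block" marker for offsets */

typedef struct {
    uint32_t size;                        /* usable bytes in this block    */
    uint32_t next_free;                   /* offset of next free block     */
    uint32_t in_use;                      /* 1 while handed out to caller  */
} block_t;

static unsigned char pool[POOL_SIZE];
static uint32_t free_head = NIL;          /* head of the free list         */
static uint32_t brk_off   = 0;            /* first never-used byte in pool */

static block_t *hdr(uint32_t off) { return (block_t *)(pool + off); }

/* Initial allocation: reuse a free block (first fit) or carve a new one. */
void *pool_alloc(uint32_t size)
{
    size = (size + 3u) & ~3u;             /* keep block headers aligned    */
    uint32_t *link = &free_head;
    for (uint32_t off = free_head; off != NIL; off = hdr(off)->next_free) {
        if (hdr(off)->size >= size) {     /* first fit: unlink and reuse   */
            *link = hdr(off)->next_free;
            hdr(off)->in_use = 1;
            return pool + off + sizeof(block_t);
        }
        link = &hdr(off)->next_free;
    }
    if (brk_off + sizeof(block_t) + size > POOL_SIZE)
        return NULL;                      /* pool exhausted                */
    uint32_t off = brk_off;
    brk_off += sizeof(block_t) + size;
    hdr(off)->size = size;
    hdr(off)->next_free = NIL;
    hdr(off)->in_use = 1;
    return pool + off + sizeof(block_t);
}

/* Release: push the block back onto the free list so it can be reused. */
void pool_free(void *ptr)
{
    if (!ptr) return;
    uint32_t off = (uint32_t)((unsigned char *)ptr - pool) - sizeof(block_t);
    hdr(off)->in_use = 0;
    hdr(off)->next_free = free_head;
    free_head = off;
}

/* Reallocation: keep the block if it is large enough, otherwise move it. */
void *pool_realloc(void *ptr, uint32_t new_size)
{
    if (!ptr) return pool_alloc(new_size);
    uint32_t off = (uint32_t)((unsigned char *)ptr - pool) - sizeof(block_t);
    if (hdr(off)->size >= new_size) return ptr;
    void *fresh = pool_alloc(new_size);
    if (fresh) {
        memcpy(fresh, ptr, hdr(off)->size);
        pool_free(ptr);
    }
    return fresh;
}

int main(void)
{
    void *a = pool_alloc(100);
    memset(a, 0xAB, 100);
    void *b = pool_realloc(a, 300);       /* grows, so the block moves     */
    pool_free(b);
    printf("allocated, reallocated and freed without the system heap\n");
    return 0;
}

Reallocation here follows the usual allocate-copy-free pattern; the point of keeping the free list inside one flat pool is that the same bookkeeping can be expressed with offsets inside a single GPU buffer.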

Embodiment 2

[0135] Preferably, based on the OpenGL-based dynamic memory allocation method provided by the present invention, the present invention also provides an OpenGL-based dynamic memory allocation device, including a memory, a processor, and a computer program that is stored in the memory and runs on the processor. The computer program is a dynamic memory allocation program, and when the processor executes it, the steps of the initial memory allocation process are implemented as follows:

[0136] Initialization step: the system initializes and generates multiple storage cache objects, takes one of them as the storage allocation lock and records it as the first storage cache object, and records the remaining storage cache objects as second storage cache objects; the second storage cache objects are all the memory blocks available for memory allocation in the system (an illustrative sketch of this step is given after the steps of this embodiment);

[0137] Element setting step: Obtain the memory size and the minimum memory un...
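As a concrete illustration of the initialization step in [0136], the following minimal C sketch creates a pool of storage cache objects, reserves the first one as the storage allocation lock, and marks the rest as memory blocks available for allocation. The names (cache_obj_t, heap_init, heap_lock, N_OBJECTS) are assumptions for this sketch, not identifiers from the patent, and a GPU implementation would use atomics on a shared buffer rather than C11 host atomics.

/* Minimal sketch of the initialization step: object 0 is the allocation
 * lock, objects 1..N-1 are the available memory blocks. Names assumed. */
#include <stdatomic.h>
#include <stdint.h>

#define N_OBJECTS 1024

typedef struct {
    atomic_uint lock;        /* used only by object 0, the allocation lock */
    uint32_t    size;        /* capacity of the block this object tracks   */
    uint32_t    available;   /* 1 while the block may still be handed out  */
} cache_obj_t;

static cache_obj_t objects[N_OBJECTS];

void heap_init(uint32_t block_size)
{
    /* first storage cache object: the storage allocation lock */
    atomic_store(&objects[0].lock, 0u);
    objects[0].size = 0;
    objects[0].available = 0;

    /* second storage cache objects: memory blocks available for allocation */
    for (uint32_t i = 1; i < N_OBJECTS; ++i) {
        objects[i].size = block_size;
        objects[i].available = 1;
    }
}

/* Acquire / release the allocation lock held in the first cache object. */
void heap_lock(void)
{
    unsigned expected = 0u;
    while (!atomic_compare_exchange_weak(&objects[0].lock, &expected, 1u))
        expected = 0u;       /* another thread holds the lock; retry */
}

void heap_unlock(void)
{
    atomic_store(&objects[0].lock, 0u);
}

int main(void)
{
    heap_init(256);          /* e.g. 256-byte blocks; the size is arbitrary */
    heap_lock();             /* a thread would now search for a free block */
    heap_unlock();
    return 0;
}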

Embodiment 3

[0162] Preferably, based on the OpenGL-based dynamic memory allocation method provided by the present invention, the present invention also provides a storage medium, namely a computer-readable storage medium on which a computer program is stored. The computer program is a dynamic memory allocation program, and when it is executed by the processor, the steps of the initial memory allocation process are implemented; the initial memory allocation process specifically includes:

[0163] Initialization step: the system initializes and generates multiple storage cache objects, takes one of them as the storage allocation lock and records it as the first storage cache object, and records the remaining storage cache objects as second storage cache objects; the second storage cache objects are all the memory blocks available for memory allocation in the system;

[0164] Element setting step: Obtain the memory size and the minim...



Abstract

The invention discloses a GPU (Graphics Processing Unit)-based dynamic memory allocation method, which comprises the following steps: recording the size of each memory block and the position of the first available memory in the system in a memory linked list management array, so that a memory block suitable for allocation can be found; and then, according to the size of the memory to be allocated and the information of the available memory blocks, setting the parameters of the data structure of the memory block to be allocated, so as to request memory from the system and realize dynamic memory allocation. The new memory linked list structure provided by the invention occupies little memory, and the disclosed dynamic memory allocation method can be applied to allocation inside OpenGL, a parallel computing architecture of the GPU, so as to suit parallel computing. The invention further discloses a GPU-based dynamic memory allocation device and a storage medium.
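As an illustration of the management array described above, the following minimal C sketch keeps one entry per block size, each recording the position (offset) of the first available block of that size, so that a suitable block can be found by scanning the array. The names (list_mgmt_t, find_block) and the power-of-two size classes are assumptions made for this sketch only.

/* Minimal sketch of a memory linked list management array: one entry per
 * block size records where the first available block of that size sits. */
#include <stdint.h>
#include <stdio.h>

#define NIL            UINT32_MAX
#define N_SIZE_CLASSES 16

typedef struct {
    uint32_t block_size;     /* size of the blocks this entry manages   */
    uint32_t first_free;     /* offset of first available block, or NIL */
} list_mgmt_t;

static list_mgmt_t mgmt[N_SIZE_CLASSES];

/* Search the management array for a block that can hold `size` bytes and
 * return its offset, or NIL if no suitable block is currently available. */
uint32_t find_block(uint32_t size)
{
    for (int c = 0; c < N_SIZE_CLASSES; ++c)
        if (mgmt[c].block_size >= size && mgmt[c].first_free != NIL)
            return mgmt[c].first_free;
    return NIL;
}

int main(void)
{
    for (int c = 0; c < N_SIZE_CLASSES; ++c) {
        mgmt[c].block_size = 32u << c;    /* 32, 64, 128, ... byte classes  */
        mgmt[c].first_free = NIL;
    }
    mgmt[2].first_free = 4096;            /* pretend a 128-byte block is free */
    printf("block for 100 bytes found at offset %u\n", find_block(100));
    return 0;
}

On the GPU side, such bookkeeping would operate on offsets within an OpenGL buffer rather than on host pointers, since shader code cannot hold raw pointers.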

Description

Technical field

[0001] The invention relates to the parallel computing architecture of a graphics processor, and in particular to a GPU-based dynamic memory allocation method, device, storage medium and memory linked list.

Background technique

[0002] A parallel computing architecture that cannot provide dynamic memory allocation cannot be regarded as fully realizing parallel computing. The main parallel computing frameworks for the GPU (Graphics Processing Unit) are CUDA (a parallel computing framework launched by NVIDIA that can only be used on its own GPUs), OpenCL (Open Computing Language) and OpenGL (Open Graphics Library). Each has its own advantages and disadvantages with respect to algorithmic operations, as follows:

[0003] (1) CUDA provides functions for dynamic memory allocation on the GPU, but CUDA is only applicable to Nvidia graphics cards;

[0004] (...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F9/50; G06F12/02; G06F12/0806; G06F12/0877
CPC: G06F9/5016; G06F9/5022; G06F12/0238; G06F12/0806; G06F12/0877
Inventor: 陈棋江, 刘玉峰, 李会江, 冯征文, 何洪举, 甘文峰
Owner: ZWCAD SOFTWARE CO LTD