GPU (Graphics Processing Unit) main memory access management method and system capable of remapping

An access management and remapping technology applied in the computer field. It addresses problems such as reduced efficiency in existing schemes, and achieves the effects of reducing mapping complexity, improving mapping efficiency, and improving the efficiency of GPU access to main memory.

Pending Publication Date: 2022-05-10
NO 709 RES INST OF CHINA SHIPBUILDING IND CORP

AI Technical Summary

Problems solved by technology

The former scheme restricts which main memory can be accessed, while the latter scheme requires an additional level of mapping, which affects efficiency.

Method used


Embodiment Construction

[0035] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0036] As shown in Figure 1, an embodiment of the present invention provides a remappable GPU main memory access management method, which comprises the following steps:

[0037] First, the main memory space is divided into two pools according to the range of the GPU main memory address space: a pass-through pool and a mapped pool. The pass-through pool is as large as the GPU main memory address space and starts at address 0; the remaining main memory space is the mapped pool.
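
To make the split concrete, below is a minimal C sketch. All names here (mem_pool_t, mem_pools_t, split_main_memory, and the size parameters) are hypothetical and not taken from the patent; the sketch only restates the rule above: the pass-through pool starts at address 0 and spans the GPU main memory address space, and the remaining main memory forms the mapped pool.

```c
#include <stdint.h>

/* One physical address range (hypothetical type, for illustration). */
typedef struct {
    uint64_t base;   /* starting physical address of the pool */
    uint64_t size;   /* size of the pool in bytes             */
} mem_pool_t;

/* The two pools the main memory space is split into. */
typedef struct {
    mem_pool_t pass_through; /* directly addressable within the GPU's range */
    mem_pool_t mapped;       /* remainder, reachable via ATU remapping      */
} mem_pools_t;

/*
 * Split main memory by the range of the GPU main memory address space:
 * the pass-through pool starts at address 0 and is as large as the GPU
 * address space; everything above it forms the mapped pool.
 */
static mem_pools_t split_main_memory(uint64_t gpu_addr_space_size,
                                     uint64_t total_main_mem_size)
{
    mem_pools_t pools;

    pools.pass_through.base = 0;
    pools.pass_through.size = gpu_addr_space_size;

    pools.mapped.base = gpu_addr_space_size;
    pools.mapped.size = total_main_mem_size - gpu_addr_space_size;

    return pools;
}
```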

[0038] The current mainstream CPU platforms all have ...


Abstract

The invention relates to the technical field of computers, in particular to a remappable GPU (Graphics Processing Unit) main memory access management method and system, comprising the following steps: dividing the main memory address space into two pools, a pass-through pool and a mapped pool, according to a preset range of the main memory address space required by the GPU; allocating space from the pass-through pool according to the continuity and size of the main memory space the GPU requires, giving the GPU either a continuous main memory space with its physical address or a discontinuous main memory space with a physical address linked list; if space is instead allocated from the mapped pool, giving the GPU a continuous or discontinuous main memory space and the corresponding physical addresses by configuring an ATU mapping; establishing, through the GMMU page table, the mapping relation between the GPU virtual address and the GPU physical address, or between the GPU virtual address and the physical address linked list; and, for the GPU virtual address to be accessed, having the GPU perform main memory access through these mapping relations. The method reduces mapping complexity and improves both the mapping efficiency and the main memory access efficiency of the GPU.
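
Read as a whole, the abstract describes a two-tier allocation flow; the C sketch below is one hedged reading of it, under assumed helper names (alloc_from_pool, configure_atu, gmmu_map) that stand in for driver internals the patent does not spell out. It only mirrors the stated order: allocate from the pass-through pool when possible, otherwise allocate from the mapped pool and configure an ATU mapping, then record the GPU virtual-to-physical relation in the GMMU page table before the GPU accesses main memory.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical driver helpers, assumed for illustration only:
 *  - alloc_from_pool(): returns a host physical address, or 0 on failure
 *  - configure_atu():   sets up an ATU window and returns the GPU-visible address
 *  - gmmu_map():        installs the GPU virtual-to-physical mapping in the GMMU page table
 */
uint64_t alloc_from_pool(int pool_id, uint64_t size, bool contiguous);
uint64_t configure_atu(uint64_t host_phys, uint64_t size);
void     gmmu_map(uint64_t gpu_virt, uint64_t gpu_phys, uint64_t size);

enum { POOL_PASS_THROUGH = 0, POOL_MAPPED = 1 };

/* Allocate main memory for the GPU and record the mapping it will use. */
uint64_t gpu_main_mem_alloc(uint64_t gpu_virt, uint64_t size, bool contiguous)
{
    /* 1. Prefer the pass-through pool: its addresses already fall inside
     *    the GPU main memory address space, so no ATU remapping is needed. */
    uint64_t gpu_phys = alloc_from_pool(POOL_PASS_THROUGH, size, contiguous);

    if (gpu_phys == 0) {
        /* 2. Fall back to the mapped pool: the host physical address lies
         *    outside the GPU's range, so an ATU mapping is configured to
         *    expose it at a GPU-visible physical address. */
        uint64_t host_phys = alloc_from_pool(POOL_MAPPED, size, contiguous);
        if (host_phys == 0)
            return 0; /* allocation failed */
        gpu_phys = configure_atu(host_phys, size);
    }

    /* 3. Record the GPU virtual-to-physical relation in the GMMU page table
     *    so later GPU accesses to gpu_virt resolve through it. */
    gmmu_map(gpu_virt, gpu_phys, size);
    return gpu_phys;
}
```

In this reading, an allocation served from the pass-through pool needs no extra address translation at all, which is consistent with the claimed reduction in mapping complexity and the improved efficiency of GPU access to main memory.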

Description

technical field

[0001] The present invention relates to the field of computer technology, and in particular to a remappable GPU (Graphics Processing Unit) main memory access management method and system.

Background technique

[0002] When a GPU is used for graphics rendering, the rendering data is generally stored in the video memory of the GPU and / or in the main memory of the CPU (Central Processing Unit). During rendering, the CPU can send rendering instructions to the GPU; a rendering instruction includes the storage addresses of the data in video memory and main memory, and the GPU reads the corresponding stored data according to those addresses and renders it to obtain the rendered image. In general, the GPU uses video memory more efficiently and accesses it faster. However, in some special application scenarios, such as when video memory is exhausted, or when space that requires frequent CPU operations...

Claims


Application Information

IPC(8): G06F9/50, G06F12/02
CPC: G06F9/5027, G06F9/5016, G06F12/023
Inventor 陈斌彬付秋高齐
Owner NO 709 RES INST OF CHINA SHIPBUILDING IND CORP