Memory access bifurcation-based GPU (Graphics Processing Unit) kernel program recombination optimization method

A GPU kernel-program recombination and optimization technology, applicable to multi-programming devices, program control devices, resource allocation, etc., that addresses problems such as low execution efficiency

Inactive Publication Date: 2013-06-12
NAT UNIV OF DEFENSE TECH
Cites: 0 | Cited by: 5

AI Technical Summary

Problems solved by technology

[0014] The technical problem to be solved by the present invention is: aiming at the low execution efficiency of large-scale multi-GPU Kernel applications, a GPU Kernel recombination optimization method based on memory access bifurcation is provided to improve the execution efficiency and application performance of large-scale GPU Kernels.




Embodiment Construction

[0069] Figure 1 shows the structure of the memory access behavior feature table. The specific method for establishing the feature table is as follows:

[0070] Create a memory access behavior feature table for each Kernel function of the GPU program. The memory access behavior feature table contains four fields: the thread number Tid, the memory type MemT accessed by the thread, the size Size of the data accessed by the thread, and the logical address Addr of the accessed storage space. The thread number Tid is the unique number of the thread within the domain of the Kernel function. The memory type MemT indicates which kind of memory the thread accesses; the memory types are global memory (Global Memory), shared memory (Shared Memory), texture memory (Texture Memory) and constant memory (Constant Memory). The data size Size indicates the number of bytes of storage space occupied by the data accessed by the thread. The logical address Addr indicates the logical address of the storage space accessed by the thread.
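As an illustration only, the feature table described in [0070] can be sketched as a plain C/CUDA record type. The type names (MemType, AccessRecord) and field widths below are assumptions made for clarity, not the patent's specification:

    #include <stdint.h>

    /* The four memory types listed in [0070]. */
    typedef enum {
        MEM_GLOBAL,    /* global memory   */
        MEM_SHARED,    /* shared memory   */
        MEM_TEXTURE,   /* texture memory  */
        MEM_CONSTANT   /* constant memory */
    } MemType;

    /* One row of the memory access behavior feature table. */
    typedef struct {
        uint32_t Tid;   /* unique thread number within the Kernel function's domain */
        MemType  MemT;  /* type of memory accessed by the thread                     */
        uint32_t Size;  /* number of bytes occupied by the accessed data             */
        uint64_t Addr;  /* logical address of the accessed storage space             */
    } AccessRecord;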



Abstract

The invention discloses a memory access bifurcation-based GPU (Graphics Processing Unit) kernel program recombination optimization method, which aims to improve the execution efficiency and application performance of large-scale GPU Kernels. The technical scheme is as follows: a memory access behavior feature table is constructed by a Create method; the memory access trace of each thread in each Kernel function is recorded by a Record method; whether memory access bifurcation occurs among the threads of the same Kernel function is then judged from the memory access addresses of the GPU threads in each Kernel function; finally, Kernel recombination optimization is performed, which comprises two steps: splitting GPU Kernels that exhibit memory access bifurcation, and fusing GPU Kernels whose memory accesses are continuous. The method solves the problem of low execution efficiency in large-scale GPU Kernel applications and improves the execution efficiency and application performance of large-scale GPU Kernels.
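As a minimal sketch only, the bifurcation judgment can be read as a contiguity check over the recorded addresses: adjacent threads (ordered by Tid) whose accesses do not line up end to end are treated as bifurcated. The function name, the parallel-array layout, and this particular contiguity criterion are assumptions for illustration; the patent's precise definition may differ.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* addr[t] and size[t] are the recorded address and byte count for thread t,
       taken from the feature table and indexed by Tid (0 .. n_threads - 1). */
    bool has_memory_access_bifurcation(const uint64_t *addr,
                                       const uint32_t *size,
                                       size_t n_threads)
    {
        for (size_t t = 0; t + 1 < n_threads; ++t) {
            /* Bifurcation: thread t+1 does not continue where thread t ends. */
            if (addr[t] + size[t] != addr[t + 1]) {
                return true;
            }
        }
        return false;  /* all threads form one contiguous access stream */
    }

Under this reading, Kernels flagged by the check are candidates for the memory access bifurcation-based split step, while Kernels whose threads form one continuous access stream are candidates for the fusion step.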

Description

Technical field

[0001] The invention relates to a method for recombining and optimizing GPU kernel programs (GPU Kernels), and in particular to a method for recombining and optimizing GPU Kernels based on memory access bifurcation.

Background technique

[0002] In recent years, the GPU (Graphics Processing Unit) has been widely used in high-performance computing fields such as molecular dynamics simulation, biological origin analysis, and meteorological prediction. When mapping large-scale GPGPU (General Purpose computing on Graphics Processing Units) applications, the standard single-Kernel programming mode cannot meet the needs of large-scale applications.

[0003] The GPU kernel program (GPU Kernel) is the program segment that runs on the GPU. Usually, the programmer transplants the compute-intensive and time-consuming kernel subroutines of a program to the GPU for acceleration; such kernel subroutines running on the GPU are called GPU Kernels.
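For orientation only, a generic GPU Kernel in the sense described above is a __global__ function launched from host code. The kernel below and its launch parameters are illustrative and are not taken from the patent:

    #include <cuda_runtime.h>

    /* Each thread handles one element; its global thread number plays the role
       of the Tid recorded in the feature table. */
    __global__ void scale_kernel(float *data, float factor, int n)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid < n) {
            data[tid] *= factor;   /* contiguous, non-bifurcated access pattern */
        }
    }

    /* Host side: the compute-intensive subroutine is "transplanted" to the GPU. */
    void scale_on_gpu(float *d_data, float factor, int n)
    {
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        scale_kernel<<<blocks, threads>>>(d_data, factor, n);
        cudaDeviceSynchronize();
    }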


Application Information

IPC(8): G06F9/44, G06F9/50
Inventors: 甘新标, 刘杰, 迟利华, 晏益慧, 徐涵, 胡庆丰, 王志英, 苏博, 朱琪, 刘聪
Owner: NAT UNIV OF DEFENSE TECH