
GPU core program reorganization and optimization method based on memory access divergence

A GPU core program reorganization and optimization technology, applicable to multi-programming devices, program control devices, resource allocation, etc., which addresses the problem of low execution efficiency of GPU Kernels.

Inactive Publication Date: 2015-11-25
NAT UNIV OF DEFENSE TECH

AI Technical Summary

Problems solved by technology

[0014] The technical problem to be solved by the present invention is: aiming at the problem of low execution efficiency of large-scale multi-Kernel GPU applications, a GPU Kernel reorganization and optimization method based on memory access divergence is proposed to improve the execution efficiency and application performance of large-scale GPU Kernels.




Embodiment Construction

[0069] Figure 1 shows the structure of the memory access behavior feature table. The feature table is established as follows:

[0070] Create a memory access behavior feature table for each Kernel function of the GPU program. The feature table contains four fields: the thread number Tid, the memory type MemT accessed by the thread, the data size Size accessed by the thread, and the logical address Addr of the storage space accessed by the thread. Tid is the unique number of the thread within the Kernel function's domain. MemT is the type of memory the thread accesses; the memory types are global memory (Global), shared memory (SharedMemory), texture memory (TextureMemory), and constant memory (ConstantMemory). Size is the number of bytes of storage space occupied by the data the thread accesses. Addr, the logical address of the storage space accessed by the thread, indicates the address spac...
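The four-field table described above can be sketched as a simple record type. This is a minimal illustration, not the patent's implementation: the class and function names (`FeatureEntry`, `create_feature_table`) are hypothetical; only the field names Tid, MemT, Size, and Addr come from the source text.

```python
from dataclasses import dataclass
from enum import Enum


class MemT(Enum):
    """The four memory types named in the patent text."""
    GLOBAL = "Global"
    SHARED = "SharedMemory"
    TEXTURE = "TextureMemory"
    CONSTANT = "ConstantMemory"


@dataclass
class FeatureEntry:
    """One row of a Kernel's memory access behavior feature table."""
    tid: int     # Tid: unique thread number within the Kernel function's domain
    mem_t: MemT  # MemT: memory type accessed by the thread
    size: int    # Size: bytes of storage space accessed by the thread
    addr: int    # Addr: logical address of the accessed storage space


def create_feature_table(accesses):
    """Hypothetical Create step: build the per-Kernel feature table
    from recorded (tid, mem_t, size, addr) tuples."""
    return [FeatureEntry(tid, mem_t, size, addr)
            for tid, mem_t, size, addr in accesses]


# Two threads each reading a contiguous 4-byte word from global memory.
table = create_feature_table([
    (0, MemT.GLOBAL, 4, 0x1000),
    (1, MemT.GLOBAL, 4, 0x1004),
])
```

One entry is appended per recorded access, so the table grows with the Kernel's recorded memory trace.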



Abstract

The invention discloses a memory-access-divergence-based GPU (Graphics Processing Unit) kernel program reorganization and optimization method, which aims to improve the execution efficiency and application performance of large-scale GPU Kernels. The technical scheme is as follows: a memory access behavior feature table is constructed with a Create method; the memory access trace of each thread in each Kernel function is recorded with a Record method; next, whether memory access divergence occurs among the threads of the same Kernel function is judged from the memory access addresses of the GPU threads in each Kernel function; Kernel reorganization optimization is then performed, comprising two steps: splitting GPU Kernels that exhibit memory access divergence, and fusing GPU Kernels with contiguous memory accesses. The method solves the problem of low execution efficiency of large-scale GPU Kernel applications and improves the execution efficiency and application performance of large-scale GPU Kernels.
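The divergence-judgment step in the abstract can be sketched as follows. This is a hedged illustration of one plausible rule, not the patented algorithm: it assumes accesses within a warp-sized group of threads are coalesced when consecutive threads touch contiguous addresses (next address = previous address + previous size), and divergent otherwise. The function name `is_divergent` and the tuple layout are assumptions for illustration.

```python
def is_divergent(accesses, warp_size=32):
    """Judge memory access divergence for one Kernel's recorded accesses.

    accesses: list of (tid, addr, size) tuples, one per thread access.
    Returns True if any warp-sized group of consecutive threads makes
    non-contiguous accesses (a common proxy for uncoalesced access).
    """
    accesses = sorted(accesses)  # order by thread number Tid
    for w in range(0, len(accesses), warp_size):
        group = accesses[w:w + warp_size]
        for (_, a_addr, a_size), (_, b_addr, _) in zip(group, group[1:]):
            if b_addr != a_addr + a_size:
                return True  # gap or stride between neighbors: divergence
    return False


# Contiguous 4-byte accesses across a warp: coalesced, no divergence.
coalesced = is_divergent([(t, 0x1000 + 4 * t, 4) for t in range(32)])
# 128-byte stride between neighboring threads: divergent.
strided = is_divergent([(t, 0x1000 + 128 * t, 4) for t in range(32)])
```

Under this rule, Kernels flagged as divergent would be candidates for the splitting step, while non-divergent Kernels with contiguous accesses would be candidates for fusion.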

Description

technical field

[0001] The invention relates to a method for reorganizing and optimizing GPU core programs (i.e., GPU Kernels), in particular to a GPU Kernel reorganization and optimization method based on memory access divergence.

Background technique

[0002] In recent years, the GPU (Graphics Processing Unit) has been widely used in molecular dynamics simulation, biological origin analysis, meteorological prediction, and other fields. For large-scale GPGPU (General-Purpose computing on Graphics Processing Units) application mapping, the standard single-Kernel programming mode cannot meet the needs of large-scale applications.

[0003] A GPU core program (GPU Kernel) is a program segment that runs on the GPU. Usually, the programmer ports the compute-intensive, time-consuming core subroutines of a program to the GPU for acceleration; such core subroutines running on the GPU are usually called GPU Kernels.

[0004] In the ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F9/44; G06F9/50
Inventor: 甘新标, 刘杰, 迟利华, 晏益慧, 徐涵, 胡庆丰, 王志英, 苏博, 朱琪, 刘聪
Owner NAT UNIV OF DEFENSE TECH