
GPU instruction submitting server based on service

A server and instruction technology, applicable to processor architecture/configuration, program synchronization, and multi-programming devices, that solves the problem of instructions not being submitted in time and achieves high operating efficiency.

Pending Publication Date: 2019-11-26
CHINESE AERONAUTICAL RADIO ELECTRONICS RES INST

AI Technical Summary

Problems solved by technology

When a partition needs to send GPU commands, it sends a request to the GPU command submission server, which collects and manages the commands and then sends them to the GPU for execution. This solves the problem that, under the traditional method, a command may not be submitted within its time slice; it improves the determinism of instruction scheduling and the operating efficiency of the system.
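The collect-and-forward flow described above can be sketched in C. This is a minimal single-threaded illustration, not the patent's actual implementation: the queue layout, ringbuffer, and function names (`partition_submit`, `server_drain`) are assumptions.

```c
#include <stdint.h>

#define MAX_REQ   16
#define RING_SIZE 64

typedef struct {
    int      partition_id;
    uint32_t cmd;            /* one GPU command word, for illustration */
} gpu_request_t;

/* Server-owned request queue: partitions enqueue here, never touch the ring. */
static gpu_request_t req_queue[MAX_REQ];
static int req_count = 0;

/* The command buffer (ringbuffer) that only the server writes. */
static uint32_t ringbuffer[RING_SIZE];
static int ring_head = 0;    /* write pointer, advanced by the server */

/* Called from a partition: hand the command to the submission server. */
int partition_submit(int partition_id, uint32_t cmd)
{
    if (req_count >= MAX_REQ)
        return -1;           /* queue full: caller retries next time slice */
    req_queue[req_count].partition_id = partition_id;
    req_queue[req_count].cmd = cmd;
    req_count++;
    return 0;
}

/* Server side: drain all pending requests into the ringbuffer in order. */
int server_drain(void)
{
    int written = 0;
    for (int i = 0; i < req_count; i++) {
        ringbuffer[ring_head % RING_SIZE] = req_queue[i].cmd;
        ring_head++;
        written++;
    }
    req_count = 0;
    /* A real server would now update the GPU's write-pointer register. */
    return written;
}
```

Because the server is the only writer of the ringbuffer, partitions no longer contend for it within their time slices; they only enqueue requests.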


Image

Figure: GPU instruction submitting server based on service


Embodiment Construction

[0027] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0028] The service-based GPU command submission server of this embodiment runs in the kernel mode of the operating system and is responsible for receiving command requests sent by partitions and for interpreting and executing them. Each partition no longer sends GPU commands directly to the ringbuffer; instead, GPU commands are submitted to the server, which interacts with the ringbuffer. Referring to figure 1, the service-based GPU instruction submission server performs the following program steps:

[0029] Step 1: Complete initialization after power on.

[0030] Step 2: Run in system kernel mode and cyclically check whether a partition CPU has requested to submit GPU instructions; if so, go to step 3.

[0031] Step 3: Receive the GPU instruction submitted by the partition CPU. Partition CPUs need to submit GPU ins...
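The steps above (with step 4 from the abstract: write to the command buffer, update the mark, notify the GPU) can be sketched as one pass of a server loop. This is a hedged illustration: the polling flag, the mark, and the notification counter (`pending_cmd`, `write_flag`, `gpu_notifications`) are simulated stand-ins, not the patented mechanism.

```c
#include <stdint.h>

#define CMD_BUF_SIZE 32

static uint32_t cmd_buffer[CMD_BUF_SIZE]; /* command buffer (ringbuffer) */
static int write_flag = 0;                /* "corresponding mark" of step 4 */
static int gpu_notifications = 0;         /* stands in for the GPU doorbell */

/* Simulated partition request: nonzero means "a CPU wants to submit". */
static uint32_t pending_cmd = 0;

void partition_request(uint32_t cmd) { pending_cmd = cmd; }

/* One pass of steps 2-4; a real server loops forever in kernel mode. */
int server_poll_once(void)
{
    /* Step 2: check whether a partition CPU requested a submission. */
    if (pending_cmd == 0)
        return 0;                         /* nothing to do this pass */

    /* Step 3: receive the GPU instruction submitted by the partition CPU. */
    uint32_t cmd = pending_cmd;
    pending_cmd = 0;

    /* Step 4: write to the command buffer, update the mark, notify the GPU. */
    cmd_buffer[write_flag % CMD_BUF_SIZE] = cmd;
    write_flag++;
    gpu_notifications++;
    return 1;
}
```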



Abstract

The invention discloses a service-based GPU (Graphics Processing Unit) instruction submission server. The server executes the following program steps: step 1, complete initialization after power-on; step 2, run in system kernel mode and cyclically query whether a partition CPU requests to submit a GPU instruction, and if so, enter step 3; step 3, receive the GPU instruction submitted by the partition CPU; step 4, send the GPU instruction to the command buffer, update the corresponding mark, and notify the GPU to read the instruction. The traditional semaphore/mutex-based mode is replaced with a service-based GPU instruction submission mode, improving the operating efficiency, determinism, and configurability of the system.

Description

technical field [0001] The invention belongs to the field of graphics processing unit (GPU) drivers. Background technique [0002] Graphics processing units (GPUs) are widely used in fields that require graphics generation and display, such as industry, medical care, and consumer electronics. In the most common application scenario, the central processing unit (CPU) exchanges data and commands with the GPU over the PCI or PCIE bus, and the GPU is responsible for drawing graphics and outputting them. In the process of 3D graphics generation, the GPU driver running on the CPU is responsible for converting high-level language into GPU instructions that the GPU can understand and execute, and for sending those GPU instructions to the GPU for execution according to a certain instruction submission strategy. Usually a memory area is allocated in CPU memory, generally called a command buffer (ringbuffer), which is used to store GPU instructions, and open this pa...
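The command buffer (ringbuffer) described above relies on standard circular-buffer arithmetic: the CPU driver appends at a write index, the GPU consumes at a read index. A minimal sketch, assuming a power-of-two size and one slot kept empty to distinguish full from empty; the names (`rb_free`, `rb_push`) and layout are illustrative, not from the patent.

```c
#include <stdint.h>

#define RB_SIZE 8  /* power of two keeps the index masking cheap */

typedef struct {
    uint32_t buf[RB_SIZE];
    unsigned head;  /* CPU write index */
    unsigned tail;  /* GPU read index (advanced by the GPU; simulated here) */
} ringbuffer_t;

/* Free slots, leaving one unused so head == tail unambiguously means empty. */
unsigned rb_free(const ringbuffer_t *rb)
{
    return (rb->tail - rb->head - 1u) & (RB_SIZE - 1u);
}

/* Driver side: append one GPU instruction if there is room. */
int rb_push(ringbuffer_t *rb, uint32_t insn)
{
    if (rb_free(rb) == 0)
        return -1;  /* full: the driver must wait for the GPU to drain */
    rb->buf[rb->head & (RB_SIZE - 1u)] = insn;
    rb->head = (rb->head + 1u) & (RB_SIZE - 1u);
    return 0;
}
```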

Claims


Application Information

IPC (8): G06F9/52, G06T1/20
CPC: G06F9/52, G06T1/20
Inventor: 廖科, 郭凡, 童歆
Owner: CHINESE AERONAUTICAL RADIO ELECTRONICS RES INST