
Collaborative scheduling method and system based on GPGPU system structure

A collaborative scheduling technology based on the GPGPU architecture, applied in the field of high-performance computing. It addresses the problem that existing scheduling-strategy optimizations do not cover the Fetch stage, and achieves strong usability and practicability, reduces the possibility of stalls, and improves performance.

Active Publication Date: 2015-05-20
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0007] In the prior art, optimization of the GPGPU warp scheduling strategy has focused on optimizing memory access or cache behavior in the Issue stage. Optimization of the scheduling strategy in the Fetch stage is not addressed; the Fetch stage simply adopts a loose round-robin (LRR) scheduling strategy by default.
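To make the default policy concrete, the following is a minimal sketch of a loose round-robin (LRR) warp selector of the kind paragraph [0007] refers to: warps sit in a circular queue, and each cycle the scheduler picks the next ready warp, rotating past stalled ones. The function names and the readiness predicate are illustrative, not from the patent.

```python
from collections import deque

def lrr_select(warps, is_ready):
    """Loose round-robin: inspect warps in circular order, return the
    first ready one. warps: deque of warp ids; is_ready: id -> bool."""
    for _ in range(len(warps)):
        w = warps[0]
        warps.rotate(-1)          # move the inspected warp to the back
        if is_ready(w):
            return w              # fetch an instruction for this warp
    return None                   # no warp is ready this cycle

warps = deque([0, 1, 2, 3])
print(lrr_select(warps, lambda w: w != 0))  # warp 0 stalled -> picks 1
```

Because LRR ignores memory and cache behavior entirely, it motivates the priority-queue scheme the invention introduces.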



Examples


Embodiment 1

[0031] Figure 1 shows the implementation process of the collaborative scheduling method based on the GPGPU architecture provided by Embodiment 1 of the present invention. The method is described in detail as follows:

[0032] In step S101, the two priority scheduling queues in the Issue stage are merged into one priority scheduling queue, and the merged priority scheduling queue is used as the priority scheduling queue in the Fetch stage;

[0033] In step S102, in the Fetch stage, an instruction is obtained from the merged priority scheduling queue;

[0034] In step S103, the acquired instruction is decoded;

[0035] In step S104, in the Issue stage, the decoded instructions are processed in parallel by the two schedulers of the Issue stage, each of which issues instructions according to its own scheduling policy;

[0036] In step S105, execution starts after the issued instruction enters the pipeline;

[0037] In step S106, ...
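Steps S101 and S102 above can be sketched as follows. This is a hypothetical illustration, assuming each scheduler's queue holds (priority, warp_id) pairs ordered so that a smaller number means higher priority; all names are placeholders, not the patent's implementation.

```python
import heapq

def merge_priority_queues(q0, q1):
    """S101: merge the two Issue-stage priority scheduling queues into
    one queue, which then also serves as the Fetch-stage queue."""
    merged = q0 + q1
    heapq.heapify(merged)         # restore the heap (priority) order
    return merged

def fetch(merged):
    """S102: take the highest-priority entry from the merged queue."""
    return heapq.heappop(merged)

q0 = [(2, "warp0"), (0, "warp2")]
q1 = [(1, "warp1")]
merged = merge_priority_queues(q0, q1)
print(fetch(merged))  # (0, 'warp2') -- highest priority fetched first
```

The point of the merge is that the Fetch stage now sees the same priority ordering that the Issue-stage schedulers maintain, instead of a priority-blind round robin.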

Embodiment 2

[0041] Figure 3 shows the composition of the collaborative scheduling system based on the GPGPU architecture provided by Embodiment 2 of the present invention. For convenience of description, only the parts related to this embodiment are shown.

[0042] The collaborative scheduling system based on the GPGPU architecture may be a software unit, a hardware unit, or a combined software/hardware unit built into a terminal device (such as a personal computer, notebook computer, tablet computer, or smartphone), or it may be integrated as an independent plug-in into the terminal device or an application system of the terminal device.

[0043] The collaborative scheduling system based on the GPGPU architecture includes:

[0044] The merging unit 31 is used to merge the two priority scheduling queues in the Issue stage into one priority scheduling queue, and use the merged priority scheduling queue as the priority scheduling queue in the...
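The system view simply wraps the method's steps in units. A hypothetical sketch of the merging unit (31), with illustrative names and the same (priority, warp_id) queue assumption as before:

```python
import heapq

class MergingUnit:
    """Merges the two Issue-stage priority scheduling queues into one
    queue that is reused as the Fetch-stage priority scheduling queue."""
    def merge(self, q0, q1):
        merged = q0 + q1
        heapq.heapify(merged)     # smaller number = higher priority
        return merged

unit = MergingUnit()
print(unit.merge([(1, "w0")], [(0, "w1")])[0])  # (0, 'w1') at the head
```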



Abstract

The invention belongs to the technical field of high-performance computing and provides a collaborative scheduling method and system based on the GPGPU architecture. The collaborative scheduling method comprises the following steps: the two priority scheduling queues of the Issue stage are merged into one priority scheduling queue, and the merged queue is used as the priority scheduling queue of the Fetch stage; in the Fetch stage, instructions are obtained from the merged priority scheduling queue; the obtained instructions are decoded; in the Issue stage, the decoded instructions are processed in parallel by the two schedulers of the Issue stage, each of which issues instructions according to its own scheduling strategy; the issued instructions enter the pipeline and are executed; and the execution result of each instruction is written to a specific position. With the collaborative scheduling method and system based on the GPGPU architecture, GPGPU performance can be effectively improved.

Description

Technical Field

[0001] The invention belongs to the technical field of high-performance computing, and in particular relates to a collaborative scheduling method and system based on the GPGPU architecture.

Background

[0002] A General Purpose Graphics Processing Unit (GPGPU) is a high-performance, parallel computing processor.

[0003] From the perspective of hardware resources, take NVIDIA's Fermi-architecture GPGPU as an example. At the hardware level it is a separate, relatively large board connected to the host system through a PCI slot. Internally, a GPGPU contains several SMs (Streaming Multiprocessors), each of which is an independent execution unit in hardware. Each SM contains several SPs (Scalar Processors), the basic computing units of the hardware. In addition to SPs, the computing units also include several SFUs, components used to perform special computing functions. In addition t...
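The device/SM/SP/SFU hierarchy described in paragraph [0003] can be summarized in a small data-model sketch. The counts below (16 SMs, 32 SPs and 4 SFUs per SM) are placeholders roughly matching a Fermi-class part, not figures taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SM:
    """Streaming Multiprocessor: an independent execution unit."""
    sps: int = 32     # scalar processors (general compute units)
    sfus: int = 4     # special function units (special computations)

@dataclass
class GPGPU:
    """A device holds several SMs; each SM holds SPs and SFUs."""
    sms: list = field(default_factory=lambda: [SM() for _ in range(16)])

gpu = GPGPU()
print(len(gpu.sms), gpu.sms[0].sps)  # 16 32
```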


Application Information

IPC IPC(8): G06F9/50
Inventor 张洪亮喻之斌冯圣中
Owner SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI