
A method and device for GPU topology partitioning

A topology-partitioning technology applied in the computer field. It addresses problems such as inefficient inter-GPU communication, disorganized GPU partitioning, and the resulting slowdown of artificial intelligence computation, so as to reduce the time spent on transmission between GPUs and improve calculation speed.

Active Publication Date: 2022-06-07
INSPUR SUZHOU INTELLIGENT TECH CO LTD

AI Technical Summary

Problems solved by technology

However, most artificial intelligence R&D personnel lack low-level knowledge of GPUs. In the existing technology, communication between GPUs is inefficient because it lacks underlying optimization, and the disorganized partitioning of GPUs slows down artificial intelligence computation.
[0003] The existing technology offers no effective solution to the problem that disorganized GPU partitioning slows down the calculation speed of artificial intelligence.



Embodiment Construction

[0038] In order to make the objectives, technical solutions and advantages of the present invention more clearly understood, the embodiments of the present invention will be further described in detail below with reference to the specific embodiments and the accompanying drawings.

[0039] It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are intended only to distinguish two entities or parameters that share the same name but are not identical. Accordingly, "first" and "second" are used merely for convenience of expression and should not be construed as limiting the embodiments of the present invention; subsequent embodiments will not repeat this explanation one by one.

[0040] Based on the above objective, in a first aspect of the embodiments of the present invention, an embodiment of a method for optimizing GPU topology partitioning from the bottom layer according to the different connection relationships between GPUs is proposed.
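Paragraph [0040] and the abstract below describe first turning the GPUs' physical topology information into a bandwidth-weighted topology map. The following Python sketch is a minimal, non-authoritative illustration of that step under stated assumptions: it presumes the pairwise link types are already known (for example, parsed from the output of `nvidia-smi topo -m`), and the bandwidth values in `LINK_BANDWIDTH_GBPS`, the helper `build_gpu_topology_graph`, and the 4-GPU example are all hypothetical, not taken from the patent.

```python
# Minimal sketch (assumption-labelled): map pairwise GPU link types to
# bandwidth weights and build the GPU topology map as a weighted graph.
# The link-type strings follow the codes printed by `nvidia-smi topo -m`
# (NV#, PIX, PHB, SYS, ...); the bandwidth values below are illustrative
# placeholders, not figures taken from the patent.

LINK_BANDWIDTH_GBPS = {
    "NV2": 100.0,  # two bonded NVLink connections (illustrative value)
    "NV1": 50.0,   # single NVLink connection
    "PIX": 16.0,   # GPUs on the same PCIe switch
    "PHB": 12.0,   # path through a PCIe host bridge
    "SYS": 8.0,    # path across CPU sockets / system interconnect
}

def build_gpu_topology_graph(link_types):
    """link_types: dict mapping a (gpu_i, gpu_j) pair to a link-type string.
    Returns the topology map as {frozenset({gpu_i, gpu_j}): bandwidth}."""
    graph = {}
    for (i, j), link in link_types.items():
        # Unknown link types fall back to a small default weight.
        graph[frozenset((i, j))] = LINK_BANDWIDTH_GBPS.get(link, 1.0)
    return graph

# Hypothetical 4-GPU server where GPUs 0-1 and 2-3 are NVLink-connected.
example_links = {(0, 1): "NV1", (2, 3): "NV1", (0, 2): "PHB",
                 (0, 3): "SYS", (1, 2): "SYS", (1, 3): "PHB"}
topology = build_gpu_topology_graph(example_links)
```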



Abstract

The invention discloses a GPU topology partitioning method and device, including: determining the interconnection bandwidth between multiple GPUs according to their physical topology information and generating a GPU topology map that includes the multiple GPUs; randomly dividing the GPUs into two partitions; calculating the migration gain of every GPU in the GPU topology map, migrating the GPU with the highest migration gain in the partition containing more GPUs to the partition containing fewer GPUs, calculating the number of cross-partition connections of the current partitioning scheme, and removing the migrated GPU from the GPU topology map; and repeating the previous step until all GPUs have been removed from the GPU topology map, then selecting the partitioning scheme with the smallest number of cross-partition connections as the partitioning result. The present invention can optimize GPU topology partitioning from the bottom layer according to the different connection relationships between GPUs, reduce the time spent on transmission between GPUs, and improve the calculation speed of artificial intelligence.
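The abstract outlines a Kernighan-Lin-style bipartitioning loop. The Python sketch below is a minimal illustration of those steps, not the patent's implementation; it reuses the `{frozenset({i, j}): bandwidth}` graph shape assumed in the earlier sketch, and it fills in details the abstract leaves open as assumptions: the migration gain is taken to be a GPU's bandwidth to the other partition minus its bandwidth within its own partition, partition sizes are counted over the GPUs still in the topology map, and removed GPUs no longer contribute to later gains.

```python
import random

def partition_gpus(bandwidth, seed=0):
    """Bipartition GPUs following the scheme in the abstract (a sketch).

    bandwidth: {frozenset({gpu_i, gpu_j}): interconnection bandwidth}
    Returns (assignment dict gpu -> 0/1, number of cross-partition connections).
    """
    gpus = sorted({g for edge in bandwidth for g in edge})
    rng = random.Random(seed)

    # Step 1: randomly divide the GPUs into two partitions.
    shuffled = gpus[:]
    rng.shuffle(shuffled)
    part = {g: (0 if k < len(shuffled) // 2 else 1) for k, g in enumerate(shuffled)}

    def cross_connections(assign):
        # Count edges whose endpoints lie in different partitions.
        return sum(1 for e in bandwidth if len({assign[g] for g in e}) == 2)

    def migration_gain(g, assign, active):
        # Assumed gain: bandwidth to the other partition minus bandwidth to
        # the own partition, restricted to GPUs still in the topology map.
        external = internal = 0.0
        for e, bw in bandwidth.items():
            if g in e:
                other = next(iter(e - {g}))
                if other in active:
                    if assign[other] == assign[g]:
                        internal += bw
                    else:
                        external += bw
        return external - internal

    active = set(gpus)
    best_assign, best_cut = dict(part), cross_connections(part)

    # Steps 2-3: repeatedly migrate the highest-gain GPU from the larger
    # partition to the smaller one, record the cross-partition connection
    # count of the resulting scheme, and remove that GPU from the map.
    while active:
        sizes = {0: 0, 1: 0}
        for g in active:
            sizes[part[g]] += 1
        src = 0 if sizes[0] >= sizes[1] else 1
        candidates = [g for g in active if part[g] == src] or list(active)
        mover = max(candidates, key=lambda g: migration_gain(g, part, active))
        part[mover] = 1 - part[mover]
        cut = cross_connections(part)
        if cut < best_cut:
            best_assign, best_cut = dict(part), cut
        active.remove(mover)

    # Step 4: keep the scheme with the fewest cross-partition connections.
    return best_assign, best_cut
```

Each migration yields a candidate partitioning scheme; the loop keeps whichever intermediate assignment produced the fewest cross-partition connections, mirroring the selection step stated in the abstract.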

Description

technical field

[0001] The present invention relates to the field of computers, and more particularly, to a method and device for GPU topology partitioning.

Background technique

[0002] In the fields of high-performance computing and artificial intelligence, GPUs are often used for computing acceleration. GPUs are deployed on a large scale because of their powerful computing capability and low power consumption; in particular, in the artificial intelligence field of recent years, most model training runs on GPUs, which saves a great deal of computing time and thus speeds up model iteration. Because GPUs are expensive, more and more artificial intelligence developers hope to fully improve GPU resource utilization and maximize the value of GPUs under limited GPU resources. However, most artificial intelligence developers lack low-level knowledge of GPUs. In the existing technology, communication between GPUs is inefficient due to the lack of underlying optimization, and the lack of organization of GPU partitions slows down the calculation speed of artificial intelligence.

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06T 1/20; G06F 15/173
CPC: G06T 1/20; G06F 15/17356
Inventor: 王德奎
Owner: INSPUR SUZHOU INTELLIGENT TECH CO LTD