
GPU box PCIE extended interconnection topology device

A topology and high-speed interconnection technology, applied in the field of instruments and electrical digital data processing, which solves the problems of oversized servers, server processing performance being bottlenecked, and the inability to adjust the CPU-GPU interconnection topology, achieving low transmission delay and good scalability.

Publication status: Inactive
Publication date: 2018-01-26
ZHENGZHOU YUNHAI INFORMATION TECH CO LTD

AI Technical Summary

Problems solved by technology

The standard PCIE interface is a common design approach for general-purpose servers, but GPU designs require more power and structural space. When the server carries only a single GPU this poses no problem, but artificial-intelligence and high-performance servers now need many more GPU processors. A multi-GPU design makes the server much larger and compromises compatibility with other standard-card designs. In addition, the PCIE structure becomes the bottleneck for data exchange between GPUs, which seriously affects the performance of multiple GPUs under a multi-GPU architecture.
[0004] Integrating the GPU and CPU processors binds their application scenarios together. Once an application reaches the upper limit of GPU usage, the only option is a distributed interconnection solution over the network, so the processing performance of the server becomes limited by network bandwidth and latency and cannot be improved further.
[0005] The interconnection architecture between the CPU and the GPU is fixed, so the interconnection topology cannot be adjusted to suit different application scenarios in order to achieve a reasonable balance between floating-point operations (the GPU's strength) and integer operations (the CPU's strength).


Figures 1-3: GPU box PCIE extended interconnection topology device

Examples


Embodiment 1

[0023] As shown in figure 1, the device comprises a GPU box, a CPU server, a first PCIE switch chip PEX9797 and a second PCIE switch chip PEX9797. The GPU box contains eight groups of interconnected GPUs, and its uplink port is configured with two sets of PCIE X16 ports to connect to the CPU server. Port S0 of the first PCIE switch chip PEX9797 is connected to the first high-speed interconnection cable card through PCIE slot1, and port S1 of the first PCIE switch chip PEX9797 is connected to the CPU server through a slimline interface. Ports S2 and S4 are connected to a PCIE connection terminal; that terminal is connected to another PCIE connection terminal through a 2*PCIE Gen3 X16 bus; the other terminal is connected to two PCIE slot chips through two PCIE X16 buses; and the two PCIE slot chips are connected to GPU0 and GPU1, respectively, through PCIE X16 buses. Similarly, ports S3 and S5 of the first PCIE switch chip PEX9797 are co...
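To make the Embodiment 1 fan-out easier to trace, here is a minimal sketch (not part of the patent) that records only the links paragraph [0023] names explicitly — the PEX9797 ports S0 to S5, the PCIE connection terminals, the slot chips and GPU0/GPU1 — as a small undirected graph and checks that each GPU can reach the CPU server through the first switch chip. The node labels and the graph model are illustrative assumptions.

```python
# Illustrative sketch of the Embodiment 1 topology; node names are assumptions.
from collections import defaultdict, deque

links = [
    # first PEX9797 switch chip: cable-card port and CPU uplink
    ("PEX9797_1.S0", "cable_card_1"),   # via PCIE slot1, for box-to-box interconnect
    ("PEX9797_1.S1", "CPU_server"),     # slimline interface uplink
    # ports S2/S4 fan out through a pair of PCIE connection terminals
    ("PEX9797_1.S2", "terminal_A"),
    ("PEX9797_1.S4", "terminal_A"),
    ("terminal_A", "terminal_B"),       # 2 * PCIE Gen3 X16 between terminals
    ("terminal_B", "slot_chip_0"),
    ("terminal_B", "slot_chip_1"),
    ("slot_chip_0", "GPU0"),
    ("slot_chip_1", "GPU1"),
]

# Ports of one switch chip share its internal crossbar, so treat them as one node.
def node(name: str) -> str:
    return name.split(".")[0]

graph = defaultdict(set)
for a, b in links:
    graph[node(a)].add(node(b))
    graph[node(b)].add(node(a))

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over the undirected link graph."""
    seen, queue = {src}, deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return True
        for nxt in graph[cur] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# GPU0 and GPU1 reach the CPU server only through the first switch chip.
for gpu in ("GPU0", "GPU1"):
    print(gpu, "-> CPU_server:", reachable(gpu, "CPU_server"))
```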

Embodiment 2

[0029] Embodiment 2 differs from Embodiment 1 in that, when the uplink port of the GPU box is configured with a single set of PCIE X16 ports to connect to the CPU server, port S0 of the first PCIE switch chip and port S1 of the second PCIE switch chip are connected to the slimline interface of the CPU server through a PCIE-to-slimline adapter card.
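As a quick comparison of the two uplink configurations, the following sketch (again an illustration, not the patent's own description) lists the uplink wiring stated for Embodiments 1 and 2; the second uplink in the two-X16 case is assumed by symmetry and labelled as such.

```python
# Illustrative comparison of the two GPU-box uplink configurations.
# Dictionary keys, the helper function, and the presumed second uplink in the
# two-X16 case are assumptions made here, not statements from the patent.

UPLINK_CONFIGS = {
    # Embodiment 1: two sets of PCIE X16 uplink ports. The text states that
    # port S1 of the first PEX9797 reaches the CPU server over a slimline
    # interface; the second switch chip is assumed here to mirror that wiring.
    "two_x16_uplinks": [
        {"switch": "PEX9797_1", "port": "S1", "via": "slimline interface"},
        {"switch": "PEX9797_2", "port": "S1", "via": "slimline interface (assumed symmetric)"},
    ],
    # Embodiment 2: a single set of PCIE X16 uplink ports. Port S0 of the first
    # switch chip and port S1 of the second go through a PCIE-to-slimline
    # adapter card to the CPU server's slimline interface.
    "one_x16_uplink": [
        {"switch": "PEX9797_1", "port": "S0", "via": "PCIE-to-slimline adapter card"},
        {"switch": "PEX9797_2", "port": "S1", "via": "PCIE-to-slimline adapter card"},
    ],
}

def describe_uplinks(config_name: str) -> None:
    """Print how each switch chip reaches the CPU server in the chosen configuration."""
    for link in UPLINK_CONFIGS[config_name]:
        print(f"[{config_name}] {link['switch']}.{link['port']} -> CPU server via {link['via']}")

describe_uplinks("two_x16_uplinks")
describe_uplinks("one_x16_uplink")
```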



Abstract

The invention discloses a GPU box PCIE extended interconnection topology device. The device comprises a GPU box containing eight groups of interconnected GPUs, a CPU server connected with the GPUs, and a PCIE switch module. The PCIE switch module installs a high-speed interconnection cable card through a PCIE slot to realize interconnection among different GPU boxes; the GPUs are connected through the PCIE switch module; the CPU server is connected with the GPU box through a PCIE link; and the PCIE switch module extends the PCIE link into eight groups of PCIE X16 links to realize connection between the CPU server and the eight groups of GPUs. Through the device, an independent design of a standard PCIE interface GPU box is realized, maximization of GPU performance is guaranteed, transmission delay is low, extensibility is good, flexible configuration and matched usage are possible, and a GPU box design suited to high-performance and artificial-intelligence workloads is provided.

Description

technical field
[0001] The invention relates to the technical field of server board design, and in particular to a GPU box PCIE extended interconnection topology device.
Background technique
[0002] With the rise of artificial intelligence and high-performance computing, the advantages of GPU (Graphics Processing Unit) computing are becoming more and more apparent in high-performance computers. Compared with a traditional CPU processor, a GPU's very large number of cores makes it better suited to the highly parallel workloads required by artificial intelligence and high-performance computing, and GPU servers have become the next rapid growth point for servers.
[0003] Current GPU designs basically adopt the universal PCIE (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus) slot interface and are basically integrated into the server, bound to the server itself, and sold as a GPU server or high-performance server. The standard PCIE interface is a common...


Application Information

IPC(8): G06F13/40, G06F13/42
Inventor: 李岩
Owner: ZHENGZHOU YUNHAI INFORMATION TECH CO LTD