
Distributed AI training topology based on flexible cable connection

An interface and interconnection technology, applied in electrical digital data processing, architectures with multiple processing units, program control design, etc., which addresses problems such as significant hardware overhead and clumsiness.

Active Publication Date: 2021-09-21
KUNLUNXIN TECH BEIJING CO LTD
Cites: 5 | Cited by: 0

AI Technical Summary

Problems solved by technology

However, neither of these approaches is feasible, as they are either clumsy or require significant hardware overhead.


Image

  • Distributed AI training topology based on flexible cable connection (3 drawings)

Examples


Detailed Description of Embodiments

[0014] Various embodiments and aspects of the disclosure will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the present disclosure and should not be construed as limiting the present disclosure. Numerous specific details are described in order to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure.

[0015] Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.



Abstract

A data processing system includes a central processing unit (CPU) (107,109) and accelerator cards coupled to the CPU (107,109) over a bus, each of the accelerator cards having a plurality of data processing (DP) accelerators to receive DP tasks from the CPU (107,109) and to perform the received DP tasks. At least two of the accelerator cards are coupled to each other via an inter-card connection, and at least two of the DP accelerators are coupled to each other via an inter-chip connection. Each of the inter-card connection and the inter-chip connection is capable of being dynamically activated or deactivated, such that in response to a request received from the CPU (107,109), any one of the accelerator cards or any one of the DP accelerators within any one of the accelerator cards can be enabled or disabled to process any one of the DP tasks received from the CPU (107,109).
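
To make the abstract concrete, below is a minimal illustrative sketch in Python of how such a topology might be modeled; it is not taken from the patent, and the class names (Accelerator, Topology), the set_link/set_accelerator/dispatch methods, and the task-dispatch logic are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: all names and methods here are hypothetical,
# not part of the patent's disclosure.
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple


@dataclass
class Accelerator:
    card_id: int
    chip_id: int
    enabled: bool = True

    def run(self, task: str) -> str:
        # Stand-in for executing a data-processing (DP) task on this chip.
        return f"card{self.card_id}/chip{self.chip_id} ran {task}"


@dataclass
class Topology:
    accelerators: Dict[Tuple[int, int], Accelerator] = field(default_factory=dict)
    # Connections that can be dynamically activated or deactivated.
    inter_chip: Set[frozenset] = field(default_factory=set)  # links within a card
    inter_card: Set[frozenset] = field(default_factory=set)  # cable links between cards

    def set_link(self, a: Tuple[int, int], b: Tuple[int, int], active: bool) -> None:
        # Same card id on both ends -> inter-chip link; otherwise inter-card cable.
        link = frozenset((a, b))
        target = self.inter_chip if a[0] == b[0] else self.inter_card
        (target.add if active else target.discard)(link)

    def set_accelerator(self, key: Tuple[int, int], enabled: bool) -> None:
        # Enable or disable a single DP accelerator in response to a host request.
        self.accelerators[key].enabled = enabled

    def dispatch(self, task: str) -> List[str]:
        # The host CPU hands the DP task to every currently enabled accelerator.
        return [acc.run(task) for acc in self.accelerators.values() if acc.enabled]


# Usage: two cards with two chips each; activate two links, disable one chip.
topo = Topology({(c, k): Accelerator(c, k) for c in range(2) for k in range(2)})
topo.set_link((0, 0), (0, 1), active=True)   # inter-chip link inside card 0
topo.set_link((0, 1), (1, 0), active=True)   # inter-card cable between cards 0 and 1
topo.set_accelerator((1, 1), enabled=False)  # take one chip out of the job
print(topo.dispatch("all-reduce step"))
```

The point of the sketch is only the control structure implied by the abstract: both kinds of links and individual accelerators are independently switchable at run time, so the set of resources serving a given DP task can be chosen per request from the CPU.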

Description

Technical Field

[0001] Embodiments of the present disclosure relate generally to machine learning. More specifically, embodiments of the present disclosure relate to artificial intelligence (AI) accelerator chip topologies.

Background

[0002] Distributed AI training requires multiple AI accelerator chips working simultaneously to speed up the overall training process and reduce training time. A topology of AI accelerator chips is therefore needed to coordinate the chips. Depending on training needs, this topology can vary in size from single digits to thousands of AI accelerator chips. Typically, small topologies can be built using printed circuit board (PCB) wiring on a substrate, while large topologies can be built using Ethernet to connect different substrates. However, neither of these approaches is feasible, as they are either clumsy or require significant hardware overhead.

Summary of the Invention

[0003] According to a first aspect, some emb...
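
As a rough illustration of why reconfigurable cable links help with the scaling problem described in the background, the sketch below (hypothetical, not from the patent; the ring_links function and Link alias are invented for this example) shows how the same pool of accelerator cards could be regrouped into rings of different sizes simply by choosing which inter-card links to activate, with no PCB rework and no separate Ethernet fabric.

```python
# Hypothetical sketch: grouping cards into ring topologies of different sizes
# by selecting which inter-card cable links to activate.
from typing import List, Set, Tuple

Link = Tuple[int, ...]  # an unordered pair of card ids, stored sorted


def ring_links(card_ids: List[int]) -> Set[Link]:
    """Return the set of inter-card cable links forming a ring over the given cards."""
    n = len(card_ids)
    return {tuple(sorted((card_ids[i], card_ids[(i + 1) % n]))) for i in range(n)}


# The same hardware pool, regrouped per training job:
small_job = ring_links([0, 1, 2, 3])     # 4-card ring
large_job = ring_links(list(range(16)))  # 16-card ring over the same kind of cables
print(sorted(small_job))
print(len(large_job))
```

The design point being illustrated is only that the link set, rather than the physical wiring, determines the effective topology, which is what lets one hardware build serve jobs of very different sizes.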

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06N3/04
CPCG06F13/36G06N3/084G06N3/063G06F15/17318G06F15/17306G06F9/5011G06N3/08G06N20/00G06F9/28G06N3/045G06F9/5027G06F15/80
Inventor 朱贺飞欧阳剑赵志彪龚小章陈庆澍
Owner KUNLUNXIN TECH BEIJING CO LTD