
Distributed deep learning method based on pipeline annular parameter communication

A parameter-communication and deep-learning technology, applied in the field of deep learning, that solves the problems of low cluster training speed and long computation time, and achieves the effects of shortening communication time, reducing communication volume, and avoiding communication congestion.

Pending Publication Date: 2021-05-28
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

The common distributed AllReduce algorithm uses a single machine to collect the gradient data from every node and then send the updated gradients back to each node. In this approach, however, the communication time of the gradient-collecting node increases linearly with the number of cluster nodes, which leads to long computation time and low cluster training speed.
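To make the scaling issue concrete, below is a minimal sketch (not taken from the patent) of this centralized gather-then-broadcast pattern; the function name and toy gradient shapes are illustrative assumptions. The collector must receive one gradient from, and send one update to, every worker, so its traffic grows linearly with the cluster size.

```python
# Sketch only: a toy illustration of the centralized AllReduce baseline described
# above, where one collector node gathers every worker's gradient and broadcasts
# the averaged result back. The collector's communication volume grows linearly
# with the number of workers.
import numpy as np

def centralized_allreduce(worker_grads):
    """worker_grads: list of gradient vectors, one per worker (hypothetical shapes)."""
    # Step 1: collector receives one gradient per worker -> O(N) messages in.
    gathered = np.stack(worker_grads)
    # Step 2: collector averages the gathered gradients.
    averaged = gathered.mean(axis=0)
    # Step 3: collector sends the result back to every worker -> O(N) messages out.
    return [averaged.copy() for _ in worker_grads]

if __name__ == "__main__":
    grads = [np.random.randn(4) for _ in range(8)]   # 8 workers, toy 4-dim gradients
    updated = centralized_allreduce(grads)
    print(updated[0])
```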



Examples


Embodiment 1

[0036] This embodiment proposes a distributed deep learning method based on pipeline ring parameter communication, with reference to Figures 1-2, which show the flow of the method in this embodiment.

[0037] The distributed deep learning method based on pipeline ring parameter communication proposed by this embodiment includes the following steps:

[0038] S1: Obtain a training model, and use the training model to initialize computing nodes in the cluster.

[0039] Before training starts, the locally stored training model is used to initialize the computing nodes in the cluster, and the same training-related parameters are defined for each node: a loss function l, an optimizer A, an iteration count K, and a pipeline dependency value P. Two flag arrays and a model-state storage array m are also defined for each compute node in the cluster, where the flag array Flag corresponds to whether the loc...
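As a rough illustration only, the following sketch mirrors this initialization step in plain Python; the field names (flag_a, flag_b, model_states) and the placeholder loss and optimizer are assumptions, since the source text truncates the exact semantics of the flag arrays.

```python
# A minimal sketch, assuming hypothetical names, of the per-node initialization in
# step S1: every compute node starts from the same locally stored model and defines
# identical training parameters (loss function l, optimizer A, iteration count K,
# pipeline dependency value P), plus two flag arrays and a model-state storage array m.
import numpy as np

def init_node(local_model_params, K=100, P=1):
    node = {
        "model": np.copy(local_model_params),   # initialize from the locally stored model
        "loss_fn": lambda pred, target: np.mean((pred - target) ** 2),  # placeholder loss l
        "optimizer": "SGD",                     # placeholder for optimizer A
        "iterations": K,                        # iteration count K
        "pipeline_dependency": P,               # pipeline dependency value P
        "flag_a": np.zeros(K, dtype=bool),      # first flag array (purpose truncated in source)
        "flag_b": np.zeros(K, dtype=bool),      # second flag array (purpose truncated in source)
        "model_states": [None] * K,             # model-state storage array m
    }
    return node

if __name__ == "__main__":
    node = init_node(np.zeros(10), K=50, P=2)
    print(node["iterations"], node["pipeline_dependency"])
```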


Abstract

In order to overcome the defects of low cluster training speed and high training time overhead, the invention provides a distributed deep learning method based on pipeline ring parameter communication. The method comprises the following steps: obtaining a training model, and initializing the computing nodes in a cluster with the training model; performing distributed training on the computing nodes in the cluster with a pipelined stochastic gradient descent method, in which training model updates and gradient computation are executed while gradient communication runs in parallel; after a node completes the i-th round of gradient computation locally, compressing the gradient data, then starting a communication thread to execute a ring AllReduce operation while simultaneously starting the (i+1)-th round of iterative training, until the iterative training is completed. By adopting a ring AllReduce algorithm, the method uses ring communication to avoid the communication congestion that occurs at the server nodes of a parameter server framework, and reduces time consumption by overlapping computation and communication in a local pipeline.
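The sketch below illustrates, under assumptions of my own (a single-process simulation, a toy top-k compression, and illustrative function names simulate_ring_allreduce, compress, and train), the two mechanisms the abstract combines: a ring AllReduce in which per-node traffic stays roughly constant as the cluster grows, and a training loop that launches gradient communication for round i on a background thread while round i+1's computation begins. It is not the patent's implementation.

```python
# Sketch: ring AllReduce plus pipelined (overlapped) communication, simulated in
# one process. In a real cluster each node would exchange chunks only with its
# ring neighbours instead of sending everything to a central server.
import threading
import numpy as np

def simulate_ring_allreduce(node_grads):
    """Simulate ring AllReduce on a list of per-node gradient vectors."""
    n = len(node_grads)
    # Each node's gradient is split into n chunks; in the reduce-scatter phase,
    # after n-1 neighbour exchanges node i would hold the fully reduced chunk i.
    chunks = [np.array_split(g, n) for g in node_grads]
    reduced = [sum(chunks[node][i] for node in range(n)) for i in range(n)]
    # All-gather phase: every node ends up with all reduced chunks (averaged here).
    full = np.concatenate(reduced) / n
    return [full.copy() for _ in range(n)]

def compress(grad, keep=0.5):
    """Toy gradient compression: keep only the largest-magnitude entries."""
    k = max(1, int(len(grad) * keep))
    mask = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    mask[idx] = grad[idx]
    return mask

def train(num_rounds=3, num_nodes=4, dim=8):
    rng = np.random.default_rng(0)
    comm_thread = None
    for i in range(num_rounds):
        # Round i: each node computes its local gradient (stand-in for backprop).
        grads = [rng.standard_normal(dim) for _ in range(num_nodes)]
        compressed = [compress(g) for g in grads]
        # Wait for the previous round's communication before starting the next one.
        if comm_thread is not None:
            comm_thread.join()
        # Launch the ring AllReduce for round i on a background thread ...
        comm_thread = threading.Thread(target=simulate_ring_allreduce, args=(compressed,))
        comm_thread.start()
        # ... while round i+1's forward/backward computation can begin here.
    if comm_thread is not None:
        comm_thread.join()

if __name__ == "__main__":
    train()
```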

Description

Technical field

[0001] The present invention relates to the technical field of deep learning, and more specifically, to a distributed deep learning method based on pipeline ring parameter communication.

Background technique

[0002] Distributed deep learning, which performs cluster-parallel computation across multiple machines, has gradually become a focus of technological innovation and development. Distributed deep learning requires frequent communication and the exchange of a large amount of data, while the bandwidth of the network interface is limited, so most of the training time of a neural network is spent on data transmission. If GPUs are used for acceleration, the computation time decreases while the communication volume remains unchanged, so the proportion of time spent on communication increases further and becomes a bottleneck restricting the development of parallelization.

[0003] For the acceleration of model training, there are currently two main soluti...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08; G06F9/38; G06F9/54
CPC: G06N3/08; G06N3/084; G06F9/38; G06F9/54
Inventors: 谢俊豪, 杜云飞, 卢宇彤, 钟康游, 郭贵鑫
Owner: SUN YAT SEN UNIV