Asynchronous distributed deep learning training method, device and system

A deep learning training technology, applied in neural learning methods, transmission systems, special data processing applications, etc. It addresses the problems that existing gradient sparsification techniques cannot be directly applied to asynchronous training and that the communication bottleneck of asynchronous stochastic gradient descent otherwise remains unsolved, and it achieves the effect of reducing the amount of communication.

Status: Inactive; Publication Date: 2019-09-17
SUN YAT SEN UNIV

Problems solved by technology

[0007] In view of the above problems, the purpose of the present invention is to solve the problem that current gradient sparsification technology cannot be directly applied to asynchronous stochastic gradient descent and therefore cannot resolve its communication bottleneck.




Embodiment Construction

[0064] Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided for more thorough understanding of the present disclosure and to fully convey the scope of the present disclosure to those skilled in the art.

[0065] To facilitate the description, the parameters used in the embodiments of the present invention are first defined as follows (an illustrative sketch using this notation is given after the list):

[0066] x: model parameter;

[0067] t: model parameter version number;

[0068] t_sync: the parameter version number at the time of the model's soft synchronization;

[0069] k: the kth computing node;

[0070] gradient vector;

[0071] i-th gradient element;

[0072] σ: co...
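To make this notation concrete, the following is a minimal, purely illustrative Python sketch of the per-node state the symbols above describe. The class and method names (NodeState, apply_update, soft_sync) are hypothetical and not taken from the patent, and the soft-synchronization rule shown is only a placeholder, assuming the node simply adopts the server's parameters and records the version at which it did so.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NodeState:
    """Hypothetical per-node view of the notation in [0066]-[0071]."""
    k: int                 # index of this computing node (k)
    x: np.ndarray          # local copy of the model parameters (x)
    t: int = 0             # local model parameter version number (t)
    t_sync: int = 0        # version number at the last soft synchronization (t_sync)

    def apply_update(self, g: np.ndarray, lr: float = 0.01) -> None:
        """Apply one gradient vector g; g[i] is the i-th gradient element."""
        self.x -= lr * g
        self.t += 1

    def soft_sync(self, global_x: np.ndarray, global_t: int) -> None:
        """Placeholder soft synchronization: adopt the server's parameters
        and remember the version number at which this happened."""
        self.x = global_x.copy()
        self.t = global_t
        self.t_sync = global_t
```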



Abstract

The invention discloses an asynchronous distributed deep learning training method, device and system. The method is applied to a computing node k: the node performs forward propagation with its local model parameters and local data to obtain a loss value, obtains a gradient update via backward propagation, and then uses a sparsification function to send only the gradient values that carry the most information, which effectively reduces the communication volume from the computing node to the parameter server. Building on asynchronous stochastic gradient descent and the parameter-server architecture, the gradients returned from the parameter server to the computing nodes are also compressed, so that the communication compression ratio is effectively guaranteed. The compression ratio continues to increase as training proceeds, achieving a better compression effect and a higher communication speed.
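As a rough illustration of the sparse sending described above, the sketch below keeps only the largest-magnitude gradient elements and transmits them as (index, value) pairs. The function name top_k_sparsify and the fixed keep ratio are assumptions for illustration, not the patent's exact sparsification function.

```python
import numpy as np

def top_k_sparsify(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the ratio*len(grad) elements with the largest magnitude.

    Returns (indices, values), so only the most informative entries are
    sent to the parameter server, reducing node-to-server traffic.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k |g_i|
    return idx, flat[idx]

# Example: a worker compresses its gradient before sending it.
g = np.random.randn(1_000_000).astype(np.float32)
indices, values = top_k_sparsify(g, ratio=0.001)
print(indices.shape, values.shape)   # only ~0.1% of the entries are sent
```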

Description

Technical Field

[0001] The invention relates to the technical field of computer deep learning training, and in particular to an asynchronous distributed deep learning training method, device and system.

Background Technique

[0002] Large-scale deep learning models are usually trained in a distributed manner. Generally speaking, the training data of the model is distributed over several computing nodes, and the computing nodes compute the model's update values in parallel based on their local data. For example, with N computing nodes, any one of them may be denoted k, i.e. k∈(1~N).

[0003] The model is typically updated using gradient descent or stochastic gradient descent algorithms. Synchronous stochastic gradient descent (SSGD) is the most commonly used distributed implementation of stochastic gradient descent, using parameter servers or Allreduce to maintain and update the global model. Synchronous stochastic gradient descent performs strong synchronization...
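To contrast with the synchronous scheme mentioned in [0003], the following is a minimal single-process sketch of the asynchronous parameter-server pattern the background describes: each of the N workers pushes its (possibly stale) gradient whenever it is ready, and the server applies it immediately without waiting for the other workers. The class names, the thread-based setup, and the random stand-in gradients are illustrative assumptions, not the patent's implementation.

```python
import threading
import numpy as np

class ParameterServer:
    """Holds the global model x and its version t; applies updates as they arrive."""
    def __init__(self, dim: int, lr: float = 0.01):
        self.x = np.zeros(dim, dtype=np.float32)
        self.t = 0
        self.lr = lr
        self._lock = threading.Lock()

    def push(self, grad: np.ndarray) -> None:
        # Asynchronous: apply each worker's gradient as soon as it arrives.
        with self._lock:
            self.x -= self.lr * grad
            self.t += 1

    def pull(self):
        with self._lock:
            return self.x.copy(), self.t

def worker(server: ParameterServer, k: int, steps: int = 100):
    rng = np.random.default_rng(k)
    for _ in range(steps):
        x, t = server.pull()                                     # fetch current parameters
        grad = rng.standard_normal(x.shape).astype(np.float32)   # stand-in for backprop
        server.push(grad)                                        # no barrier: workers never wait

N = 4
server = ParameterServer(dim=10)
threads = [threading.Thread(target=worker, args=(server, k)) for k in range(N)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print("global version t =", server.t)   # 4 workers * 100 steps = 400 updates
```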

Claims


Application Information

IPC(8): G06N3/04, G06N3/08, G06F16/174, H04L29/08
CPC: G06N3/084, G06F16/1744, H04L67/025, H04L67/1095, H04L67/1097, G06N3/045
Inventor: 颜子杰, 陈孟强, 吴维刚, 肖侬, 陈志广
Owner: SUN YAT SEN UNIV