
Distributed training method for large-scale deep neural network

A deep neural network training technology in the field of distributed training for large-scale deep neural networks, aimed at achieving high-efficiency cluster scaling

Status: Pending
Publication Date: 2021-10-19
Applicant: ZHEJIANG LAB +1
Cites: 0 | Cited by: 6

AI Technical Summary

Problems solved by technology

[0005] To solve the above technical problems in the prior art and reduce the communication overhead of distributed deep learning training so that it approaches linear speedup, the present invention provides a distributed training method for large-scale deep neural networks, which addresses the communication bottleneck in current deep learning parallelization techniques and accelerates the distributed training of deep learning models. The specific technical scheme is as follows:




Detailed Description of the Embodiments

[0046] To make the purpose, technical solution, and technical effects of the present invention clearer, the technical solutions in the embodiments of the present invention are further described below, clearly and completely, in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

[0047] As shown in Figures 1-3, the distributed training method for large-scale deep neural networks provided by the present invention comprises the following steps:

[0048] S1: Determine the total number of servers and the number of GPUs available on each machine; build and initialize the deep learning distributed environment; and determine the overall BatchSize and learning rate used during training, as well as the communication mechanism of all computing nodes during the parameter-update phase.

[0049] Specifically, the total number of servers and the number of GPUs available for each serv...
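
As a concrete reading of step S1, the sketch below initializes a multi-node, multi-GPU environment. It assumes a PyTorch/NCCL stack launched with one process per GPU (e.g. via torchrun); the function name, the per-GPU batch size, and the linear learning-rate scaling heuristic are illustrative assumptions, not prescribed by the patent.

import os
import torch
import torch.distributed as dist

def init_distributed(base_lr=0.1, per_gpu_batch=64):
    # One process per GPU; the launcher (e.g. torchrun) provides RANK,
    # WORLD_SIZE and LOCAL_RANK for every worker.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    world_size = dist.get_world_size()  # total GPUs across all servers
    # Overall BatchSize = per-GPU batch x number of workers; scaling the
    # learning rate linearly with it is a common heuristic (an assumption
    # here, not mandated by the patent).
    global_batch = per_gpu_batch * world_size
    lr = base_lr * world_size
    return world_size, global_batch, lr

With this setup, the overall BatchSize of step S1 is the per-GPU batch multiplied by the world size, and every worker joins the same NCCL communicator that is later used in the parameter-update phase.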



Abstract

The invention belongs to the intersecting field of high-performance computing and artificial intelligence, and particularly relates to a distributed training method for a large-scale deep neural network. The method comprises the following steps: overlapping the communication process with the computation process by scheduling the parameter synchronization of each layer alongside backward error propagation, so as to hide communication overhead and accelerate model training; and, during each layer's parameter synchronization, dynamically deciding which data to transmit according to the sparsity of the different data blocks and the data compression overhead, realizing finer-grained control of Ring-All-Reduce communication. The communication overhead of parameter synchronization is thereby minimized and performance is greatly improved; without affecting model accuracy or convergence rate, the distributed training of any deep neural network approaches linear speedup, facilitating high-efficiency scaling of clusters.
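
To make the two mechanisms in the abstract concrete, the sketch below registers a gradient hook per parameter so that each layer's ring all-reduce is launched as soon as backward propagation produces that layer's gradient, overlapping communication with the computation still running for earlier layers; a small cost check decides per data block whether a compressed (value, index) encoding would beat raw transmission. This is a minimal illustration assuming PyTorch with the NCCL backend; the helper names, the byte-cost model, and the dense placeholder for the sparse path are assumptions, not the patent's exact algorithm.

import torch
import torch.distributed as dist

def cheaper_compressed(grad, bytes_per_value=4, bytes_per_index=4):
    # Per-block decision from the abstract: transmit a (value, index)
    # encoding of the nonzeros only if it is smaller than the raw dense
    # tensor, i.e. only when the block is sparse enough to amortize the
    # compression overhead. (Illustrative cost model.)
    nnz = int((grad != 0).sum())
    return nnz * (bytes_per_value + bytes_per_index) < grad.numel() * bytes_per_value

def attach_layerwise_sync(model, world_size):
    pending = []  # async all-reduce handles, drained once per step

    def hook(grad):
        # Fires the moment this layer's gradient is ready, so the ring
        # all-reduce overlaps with backprop through earlier layers.
        if cheaper_compressed(grad):
            # A full implementation would run a sparse collective here;
            # NCCL all-reduce operates on dense tensors, so this sketch
            # keeps the transfer dense and only records the decision.
            pass
        # Simplified in-place reduce; a production version would bucket
        # gradients the way DDP communication hooks do.
        pending.append((dist.all_reduce(grad, async_op=True), grad))
        return grad

    for p in model.parameters():
        if p.requires_grad:
            p.register_hook(hook)

    def finish():
        # Call before optimizer.step(): wait for outstanding reductions
        # and average the summed gradients over all workers.
        for work, grad in pending:
            work.wait()
            grad.div_(world_size)
        pending.clear()

    return finish

A training step would call the finish() closure returned by attach_layerwise_sync(...) just before optimizer.step(), so the last outstanding reductions complete only after the rest of the backward pass has already run.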

Description

Technical Field

[0001] The invention belongs to the intersecting field of high-performance computing and artificial intelligence, and in particular relates to a distributed training method for large-scale deep neural networks.

Background Technique

[0002] Deep neural networks are among the most effective artificial intelligence technologies, offering excellent accuracy and generalization in many applications such as image classification, speech recognition, and text processing. In real-world applications, large-scale deep neural networks with tens of millions or even billions of parameters often yield higher accuracy and robustness. As networks grow deeper and the parameter scale of individual layers expands, the computing and storage capacity of a single CPU, GPU, or other hardware accelerator gradually becomes insufficient for training. A straightforward way to break this limitation is to use multiple hardwa...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F 9/50; G06N 3/08; G06N 3/10
CPC: G06F 9/5027; G06F 9/5072; G06N 3/084; G06N 3/10; Y02D 10/00
Inventors: 刘楚波 (Liu Chubo), 曾子豪 (Zeng Zihao), 阳王东 (Yang Wangdong)
Owner: ZHEJIANG LAB