
A Distributed Training Method of Neural Networks with Dynamically Adjusting Batch-size

A neural network dynamic-adjustment technology, applied to neural learning methods, biological neural network models, neural architectures, etc.; it solves problems such as excessive computing-power consumption, and achieves the effects of reducing synchronization overhead and improving training efficiency.

Active Publication Date: 2022-07-01
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

A complex neural network generally requires training over many repeated epochs to achieve good results, and the longer the training takes, the more serious the consumption of computing power becomes.

Method used



Embodiment

[0062] Referring to Figure 4, the neural network distributed training method for dynamically adjusting Batch-size in this embodiment includes the following steps (an illustrative sketch follows step S6):

[0063] S1. Each computing node obtains the neural network after parameter initialization;

[0064] S2. For each computing node, dynamically adjust its Batch-size according to its computing power, and divide the cluster training-set samples into sub-data sample sets according to the adjusted Batch-sizes;

[0065] S3. For each computing node, divide its local sub-data sample set into several training batch sample sets;

[0066] S4. For each computing node, obtain an unused training batch sample set to train the local neural network, and obtain the trained gradient of the local neural network;

[0067] S5. Collect the gradients trained by the local neural network of all computing nodes;

[0068] S6. Calculate the new neural network parameters according to all the trained gradients and the current neural network parameters.
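The following minimal single-process sketch (Python with NumPy, on a toy least-squares problem) walks through steps S1-S6 of the embodiment; the node speeds, learning rate, global batch size and variable names are illustrative assumptions, not values taken from the patent. Batch-sizes are set in proportion to each node's computing power, the training set is divided in the same proportion, and an averaged gradient updates a shared parameter copy.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1200, 8))                      # cluster training set samples (S2)
    y = X @ rng.normal(size=8)                          # toy regression targets
    params = np.zeros(8)                                # S1: parameters after initialization
    speeds = [4.0, 2.0, 1.0]                            # assumed relative computing power of 3 nodes
    GLOBAL_BATCH, LR = 64, 0.05

    # S2: Batch-size of each node proportional to its computing power
    sizes = [max(1, round(GLOBAL_BATCH * s / sum(speeds))) for s in speeds]
    sizes[-1] += GLOBAL_BATCH - sum(sizes)              # keep the global batch size exact

    # S2/S3: divide the training set into sub-data sample sets of matching proportion
    bounds = (np.cumsum(sizes)[:-1] / GLOBAL_BATCH * len(X)).astype(int)
    node_X, node_y = np.split(X, bounds), np.split(y, bounds)

    steps = min(len(nx) // b for nx, b in zip(node_X, sizes))
    for epoch in range(5):
        for step in range(steps):
            grads = []
            for nx, ny, b in zip(node_X, node_y, sizes):        # S4: each node trains one local batch
                xb, yb = nx[step * b:(step + 1) * b], ny[step * b:(step + 1) * b]
                grads.append(2.0 / b * xb.T @ (xb @ params - yb))
            params -= LR * np.mean(grads, axis=0)               # S5/S6: collect, average, update
    print("relative error:", np.linalg.norm(X @ params - y) / np.linalg.norm(y))

In a real cluster the inner loop over nodes would run in parallel and the averaging in S5/S6 would be performed by the parameter-synchronization mechanism; the sketch only shows the data flow of the steps.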



Abstract

The invention discloses a neural network distributed training method for dynamically adjusting Batch-size, which relates to the technical field of distributed training of computer neural networks. The method starts from the perspective of how the training data set is processed: for each computing node in a distributed cluster, it dynamically adjusts the Batch-size and the division of sub-datasets according to that node's computing power, thereby achieving load balancing across the distributed training cluster. This distributed training method not only makes full use of the computing power of each node, but also ensures that every node takes approximately the same time to finish training on its local dataset, thereby reducing the synchronization overhead of the cluster, improving the efficiency of distributed neural network training, and shortening the training time.
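As a rough illustration of the load-balancing claim (the node speeds and Batch-sizes below are invented for the example, not drawn from the patent): if a node can process v samples per second and is assigned a Batch-size proportional to v, all nodes finish a local batch in roughly the same time, so no node idles at the synchronization barrier.

    speeds = [400.0, 200.0, 100.0]        # assumed samples/second each node can process
    uniform = [64, 64, 64]                # same Batch-size on every node
    balanced = [round(192 * v / sum(speeds)) for v in speeds]   # Batch-size proportional to speed
    step_time = lambda sizes: [b / v for b, v in zip(sizes, speeds)]
    print(step_time(uniform))             # [0.16, 0.32, 0.64] -> fast nodes wait for the slowest
    print(step_time(balanced))            # [0.275, 0.275, 0.27] -> nodes finish almost together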

Description

Technical field

[0001] The invention relates to the technical field of distributed training of computer neural networks, and in particular to a neural network distributed training method for dynamically adjusting Batch-size.

Background technique

[0002] In the distributed training of neural networks, how each computing node shares and transmits its local parameters is the key link of the whole distributed training process. At present, the synchronization mechanism is widely used for parameter synchronization in distributed neural network training because it is simple to implement and its convergence is guaranteed. Specific implementations include gradient synchronization, weight synchronization, sparse gradient synchronization, and quantized gradient synchronization. Taking the classic Synchronous Stochastic Gradient Descent (SSGD) algorithm as an example, the synchronization method is briefly introduced: different computing nodes each have a complete model copy an...
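For background, a minimal sketch of one synchronous step in the SSGD style described above (this is the generic scheme, not the patent's contribution; the gradients here are placeholders): every node holds an identical copy of the model, computes a local gradient, and all copies are updated with the same averaged gradient, which in a real cluster would be produced by an all-reduce.

    import numpy as np

    def ssgd_step(param_copies, local_grads, lr):
        avg = np.mean(local_grads, axis=0)              # done by an all-reduce in a real cluster
        return [p - lr * avg for p in param_copies]     # every replica applies the same update

    params = [np.zeros(3) for _ in range(4)]            # 4 nodes, identical initial copies
    grads = [np.array([1.0, 0.0, -1.0]) * (k + 1) for k in range(4)]   # placeholder local gradients
    params = ssgd_step(params, grads, lr=0.1)
    assert all(np.allclose(p, params[0]) for p in params)   # replicas stay synchronized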

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/08, G06N3/063, G06N3/04, G06K9/62
CPC: G06N3/08, G06N3/063, G06N3/045, G06F18/214
Inventors: 吕建成, 叶庆, 周宇浩, 刘权辉, 孙亚楠, 彭德中, 桑永胜, 彭玺
Owner: SICHUAN UNIV