
Neural network distributed training method for dynamically adjusting Batch-size

A neural network training technology with dynamic adjustment of the Batch-size, applied in neural learning methods, biological neural network models, neural architectures, etc. It addresses problems such as excessive computing-power consumption, and achieves the effects of reducing synchronization overhead and improving training efficiency.

Active Publication Date: 2020-08-07
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

A complex neural network generally requires many epochs of repeated training to reach the desired effect; the longer the training period, the greater the computing-power consumption.



Examples


Embodiment

[0062] Please refer to Figure 4. The neural network distributed training method of this embodiment, which dynamically adjusts the Batch-size, comprises the following steps (a rough code sketch follows the step list):

[0063] S1. Each computing node obtains the neural network after parameter initialization;

[0064] S2. For each computing node, dynamically adjust the Batch-size according to its computing power, and partition the cluster training-set samples into sub-data sample sets according to the adjusted Batch-size;

[0065] S3. For each computing node, divide its local sub-data sample set into several training batch sample sets;

[0066] S4. For each computing node, take an unused training batch sample set, train the local neural network on it, and obtain the resulting gradient of the local neural network;

[0067] S5. Collect the gradients produced by the local neural networks of all computing nodes;

[0068] S6. Calculate new neural network parameters according to all collected gradients and the current neural network parameters...
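As a rough, single-machine illustration of steps S1 to S6, the sketch below simulates the proportional Batch-size assignment, the sub-data-set split, and the synchronous parameter updates. The node names, speed scores, toy dataset, learning rate, and the quadratic toy loss are all illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Assumed per-node throughput scores (samples/sec); purely illustrative.
node_speeds = {"node0": 120.0, "node1": 80.0, "node2": 40.0}
global_batch = 240                      # cluster-wide Batch-size per step
dataset = np.random.rand(9600, 32)      # toy cluster training set (input to S2)

# S2: set each node's local Batch-size in proportion to its computing power,
# then carve the cluster training set into sub-data sample sets of matching share.
total_speed = sum(node_speeds.values())
local_batch = {n: max(1, round(global_batch * s / total_speed))
               for n, s in node_speeds.items()}

sub_sets, start = {}, 0
for n, b in local_batch.items():
    end = start + int(len(dataset) * b / global_batch)
    sub_sets[n] = dataset[start:end]
    start = end

# S3: split every local sub-data set into batches of that node's Batch-size.
batches = {n: [sub_sets[n][i:i + local_batch[n]]
               for i in range(0, len(sub_sets[n]), local_batch[n])]
           for n in node_speeds}

# S4-S6: each node "trains" on one unused batch (gradient of the toy loss
# 0.5 * ||w - batch mean||^2), all gradients are collected, and new parameters
# are computed from the aggregated gradients in a synchronous step.
params, lr = np.zeros(32), 0.01
steps = min(len(b) for b in batches.values())
for step in range(steps):
    grads = [params - batches[n][step].mean(axis=0) for n in node_speeds]
    params = params - lr * np.mean(grads, axis=0)
```

Because each node's share of the data and its Batch-size scale together in this sketch, every node ends up with the same number of local batches, which is what keeps the synchronous steps aligned across nodes.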



Abstract

The invention discloses a neural network distributed training method capable of dynamically adjusting the Batch-size, relating to the technical field of distributed training of computer neural networks. From the perspective of how the training data set is processed, the method dynamically adjusts the Batch-size and divides sub-data sets according to the computing power of each computing node in the distributed cluster, thereby realizing load balancing of the distributed training cluster. With this method, the computing power of each computing node can be fully utilized, and each computing node can be made to finish training on its local data set in approximately the same time, which reduces the synchronization overhead of the cluster, improves the efficiency of distributed neural network training, and shortens the neural network training time.
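One way to make the load-balancing claim concrete (the notation is ours, not the patent's): if node i has relative computing power c_i (samples processed per second) and the cluster-wide Batch-size is B, assigning it a local Batch-size B_i = B · c_i / (c_1 + ... + c_K) gives every node a per-step time of roughly B_i / c_i = B / (c_1 + ... + c_K), so all nodes finish their local batch at about the same moment. For example, with two nodes processing 200 and 100 samples per second and B = 300, the 200/100 split lets both finish a step in about one second, whereas an even 150/150 split would leave the faster node idle for about 0.75 s per step while it waits for the slower one.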

Description

Technical field

[0001] The invention relates to the technical field of computer neural network distributed training, and in particular to a neural network distributed training method that dynamically adjusts the Batch-size.

Background technique

[0002] In the distributed training of neural networks, how each computing node shares and transmits its local parameters is the key link of the entire distributed training. At present, the synchronization mechanism is widely used for parameter synchronization in distributed neural network training because it is simple to implement and its convergence is guaranteed. Specific implementations include gradient synchronization, weight synchronization, sparse gradient synchronization, quantized gradient synchronization, and other such methods. Taking the classic Synchronous Stochastic Gradient Descent (SSGD) algorithm as an example, the synchronous methods are briefly introduced: different computing nodes hold complete model copies an...
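For context, here is a minimal single-process sketch of the SSGD step described above, assuming K workers that each hold a full model copy and an equal-size local batch; grad_fn, the toy quadratic loss, and the serial simulation of the workers are assumptions of this sketch, not the patent's code.

```python
import numpy as np

def ssgd_step(params, local_batches, grad_fn, lr=0.01):
    # Every worker computes a gradient on its own local batch (simulated serially).
    grads = [grad_fn(params, batch) for batch in local_batches]
    # Synchronization barrier: the global update can only happen after the
    # slowest worker finishes, which is the overhead the patent aims to reduce
    # by giving slower nodes smaller batches.
    return params - lr * np.mean(grads, axis=0)

# Toy usage: 3 equal-size batches, per-batch loss 0.5 * ||w - batch mean||^2.
grad_fn = lambda w, x: w - x.mean(axis=0)
w = np.zeros(4)
batches = [np.random.rand(64, 4) for _ in range(3)]
w = ssgd_step(w, batches, grad_fn)
```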


Application Information

IPC(8): G06N3/08, G06N3/063, G06N3/04, G06K9/62
CPC: G06N3/08, G06N3/063, G06N3/045, G06F18/214
Inventors: 吕建成, 叶庆, 周宇浩, 刘权辉, 孙亚楠, 彭德中, 桑永胜, 彭玺
Owner: SICHUAN UNIV