
Parallel deep learning training data input method and system based on sequence predictability

A deep learning training-data input technology, applied to neural learning methods and data-processing input/output in electrical digital data processing, which addresses the problem of low data-input efficiency and achieves faster read speeds, accelerated data input, and reduced communication overhead.

Active Publication Date: 2021-02-19
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

[0004] To solve the problem of low data-input efficiency in large-scale distributed deep neural network training, the present invention proposes an efficient data input method and system for predictable large-scale deep learning based on sample sequences.

Method used



Examples


Embodiment Construction

[0032] The present invention is described in detail below with reference to the accompanying drawings.

[0033] Figure 1 shows the existing data prefetching method. Assuming the number of training nodes is M, each training node first divides all training data into M groups and caches its group from the parallel file system locally. To simplify metadata management, data is not moved between training nodes during training. As a result, the data prefetched from the underlying file system or placed into the cache may not match what the upper-level training tasks actually request, which lowers the cache hit rate and leads to low data-input efficiency. In addition, across the multiple iterations of each training round, when a node requests several data items from the same remote node multiple times, it does not merge those requests but directly issues multiple small-data requests.
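The baseline scheme described above can be sketched in a few lines. This is an illustrative toy model, not code from the patent; all names (`partition`, `BaselineNode`, etc.) are hypothetical. It shows the two weaknesses the paragraph identifies: a static shard assignment that may not match the shuffled access order, and one small remote request per miss with no merging.

```python
# Hypothetical sketch of the baseline prefetching scheme described above:
# each of M training nodes caches one fixed shard of the data, and every
# sample outside the local shard triggers its own small remote request
# (no request merging). All names are illustrative, not from the patent.

def partition(samples, num_nodes):
    """Statically split the sample IDs into num_nodes groups."""
    return [samples[i::num_nodes] for i in range(num_nodes)]

class BaselineNode:
    def __init__(self, node_id, shards):
        self.local = set(shards[node_id])  # samples cached locally
        self.remote_requests = 0           # count of unmerged remote reads

    def read(self, sample_id):
        if sample_id in self.local:
            return ("local", sample_id)
        # No merging: every miss becomes a separate small remote request.
        self.remote_requests += 1
        return ("remote", sample_id)

shards = partition(list(range(8)), num_nodes=4)
node0 = BaselineNode(0, shards)
for s in [0, 1, 2, 4, 5]:           # a prefix of some shuffled access order
    node0.read(s)
print(node0.remote_requests)        # three misses -> three separate requests
```

Even in this toy setting, consecutive misses destined for the same remote node each pay a full round trip, which is the overhead the invention's request merging targets.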

[0034] Based on this, the present invention designs a high-efficiency data inpu...



Abstract

The invention discloses a parallel deep learning training data input method based on sequence predictability. During data prefetching and caching, the method fully exploits the fact that the data access order can be determined in advance: the size of the prefetched data block read from the underlying parallel file system is chosen in combination with the cache hit rate and disk access performance, and data allocation and caching are then performed accordingly, greatly improving the local hit rate of the first training round in large-scale training. In subsequent rounds, data requests are merged, and cache replacement is performed in advance according to the data needed in the next round, reducing the communication overhead of the whole distributed training process and increasing the data-input speed of each node. The invention further provides a data input system based on the method, comprising a random sequence generation module, a data prefetching module, and a cache replacement module; the system increases the speed of reading data from storage while still satisfying the requirement of randomly reading the global data.
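Two of the ideas in the abstract can be sketched concretely. Because the shuffled access sequence is reproducible from a shared seed, every node can compute it in advance, merge all remote reads destined for the same peer into one bulk request, and evict cached entries that the next round will not need. This is a minimal illustrative sketch, not the patented implementation; the function names and the `owner` mapping are assumptions.

```python
# Hypothetical sketch (not the patented implementation) of two ideas from
# the abstract: because the shuffled access order is reproducible from a
# shared seed, (1) all remote reads destined for the same peer can be
# merged into one bulk request, and (2) cache replacement can be done
# ahead of time, keeping only entries the next round will actually use.
from collections import defaultdict
import random

def epoch_sequence(num_samples, seed):
    """Every node regenerates the same shuffled order from the seed."""
    order = list(range(num_samples))
    random.Random(seed).shuffle(order)
    return order

def merged_requests(sequence, owner_of, local_node):
    """Group remote sample IDs by owning peer: one bulk request per peer."""
    by_peer = defaultdict(list)
    for s in sequence:
        peer = owner_of(s)
        if peer != local_node:
            by_peer[peer].append(s)
    return dict(by_peer)

def replace_ahead(cache, needed_next_round):
    """Look-ahead replacement: evict entries the next round won't touch."""
    needed = set(needed_next_round)
    return {s: v for s, v in cache.items() if s in needed}

owner = lambda s: s % 4                 # assume sample s lives on node s mod 4
seq = epoch_sequence(8, seed=1)
reqs = merged_requests(seq, owner, local_node=0)
print(len(reqs))  # one merged request per remote peer, not one per miss
```

The key property is that the number of remote requests becomes bounded by the number of peers rather than by the number of cache misses, which is what reduces communication overhead as the node count grows.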

Description

Technical field

[0001] The invention belongs to the field of computer science and artificial intelligence, in particular to data-input acceleration in large-scale distributed neural network training scenarios.

Background technique

[0002] To train deep neural networks with higher prediction accuracy and stronger generalization, people use ever larger amounts of training data, so distributed storage of training data has become a necessary solution. A large number of studies have focused on the computation and communication processes of distributed training, making both very efficient; however, when there are many distributed training nodes, the speed of data supply becomes a key factor constraining the whole training process.

[0003] When the traditional parallel file system cannot meet this I/O speed demand, some studies have proposed to...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F3/06; G06N3/04; G06N3/08
CPC: G06F3/061; G06F3/064; G06F3/0656; G06F3/0676; G06N3/08; G06N3/045
Inventor 何水兵陈伟剑杨斯凌陈平陈帅犇曾令仿任祖杰杨弢
Owner ZHEJIANG LAB