Parallel deep learning training data input method and system based on sequence predictability
A deep learning training-data technology, applicable to neural-network learning methods and data-processing input/output, addressing the problem of low data input efficiency. The method achieves faster read speed, accelerates data input, and reduces communication overhead.
Detailed Description of the Embodiments
[0032] The present invention will be described in detail below with reference to the accompanying drawings.
[0033] Figure 1 illustrates the existing data prefetching method. Assuming the number of training nodes is M, each training node first divides all training data into M groups and caches its group from the parallel file system to local storage. To simplify metadata management, data is not moved between training nodes during training. The data prefetched from the underlying file system, or placed in the cache, may not match what the upper-level training tasks actually request, which lowers the cache hit rate and results in low data input efficiency. In addition, across the multiple iterations of each training epoch, when a node requests several data items from the same remote node multiple times, it does not merge these requests but directly issues multiple small-data requests.
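The unmerged small-request problem described above can be illustrated with a minimal sketch. The function and variable names below (`fetch_remote`, `naive_fetch`, `merged_fetch`) are hypothetical and not taken from the patent; the sketch only contrasts issuing one remote call per data item with grouping requests by remote node and issuing a single batched call per node.

```python
# Hypothetical sketch: per-item remote requests vs. requests merged by node.
from collections import defaultdict


def naive_fetch(requests, fetch_remote):
    """Issue one remote call per requested item (the inefficient pattern
    described for the existing prefetching method)."""
    results = {}
    for node, item in requests:
        # One small network request per item, even when several items
        # live on the same remote node.
        results[item] = fetch_remote(node, [item])[item]
    return results


def merged_fetch(requests, fetch_remote):
    """Group requests by remote node, then issue one batched call per node,
    reducing the number of small-data requests."""
    by_node = defaultdict(list)
    for node, item in requests:
        by_node[node].append(item)
    results = {}
    for node, items in by_node.items():
        # A single batched request retrieves all items held by this node.
        results.update(fetch_remote(node, items))
    return results
```

With four items spread over two nodes, the naive pattern issues four remote calls while the merged pattern issues only two, returning identical data.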
[0034] Based on this, the present invention designs a high-efficiency data input...

