A distributed deep learning method and system based on a data-parallel strategy

A deep learning and distributed-computing technology, applied in the field of deep learning training systems, which addresses problems such as the poor scalability and flexibility of distributed training, the inability to effectively reduce network communication overhead, and the lack of cluster resource management, achieving strong scalability, high training efficiency, and a simple, flexible interface.

Active Publication Date: 2018-12-18
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

Although PyTorch provides an interface for distributed training, programming with it requires configuring the node cluster, which is relatively complicated, and it has no cluster resource management function. When adding new nodes or adding new computing re...




Embodiment Construction

[0038] In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail below in conjunction with the embodiments and the accompanying drawings.

[0039] For large-scale data training, the present invention mainly adopts a data-parallel strategy combined with a parameter server to realize distributed deep learning. In the present invention, each selected worker node trains on a portion of the data and maintains a local neural network model; the parameter server receives model update information from the worker nodes and updates and maintains the global neural network model through the relevant algorithms.
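The data-parallel scheme described above (workers train on partitions of the data; the parameter server aggregates their updates into the global model) can be sketched as a minimal synchronous-SGD loop in plain NumPy. The names `ParameterServer` and `worker_step` and the least-squares objective are illustrative assumptions for this sketch, not taken from the patent.

```python
# Minimal sketch of data-parallel training with a parameter server.
# Each worker computes a gradient on its own data shard; the server
# averages the shard gradients and takes one step on the global model.
import numpy as np

class ParameterServer:
    """Holds the global model and applies averaged worker updates."""
    def __init__(self, weights):
        self.weights = weights.copy()

    def apply_updates(self, gradients, lr=0.1):
        # Synchronous update: average gradients from all workers.
        mean_grad = np.mean(gradients, axis=0)
        self.weights -= lr * mean_grad

def worker_step(weights, x_shard, y_shard):
    # Worker-side step: local gradient of a least-squares loss
    # with respect to the shared weights, on this worker's shard.
    pred = x_shard @ weights
    return 2 * x_shard.T @ (pred - y_shard) / len(y_shard)

# Two workers, each holding one partition of the data.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w

ps = ParameterServer(np.zeros(3))
for _ in range(1000):
    grads = [worker_step(ps.weights, x[:4], y[:4]),
             worker_step(ps.weights, x[4:], y[4:])]
    ps.apply_updates(grads)
# ps.weights now approximates true_w
```

The same pull-compute-push structure carries over when the "model" is a neural network and the shards live on separate Spark worker nodes; only the gradient computation changes.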

[0040] See figure 1. The deep learning system of the present invention mainly comprises the big-data distributed processing engine Spark, the PyTorch deep learning training framework, the lightweight web application framework Flask, the urllib2 module, the pickle module, a parameter setting module, and a data conversi...
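The Flask + urllib2 + pickle round trip between a worker and the parameter server can be sketched with standard-library stand-ins so the example is self-contained: `http.server` plays the role of Flask, and `urllib.request` plays the role of urllib2. The GET-to-pull / POST-to-push convention is an assumption for this sketch, not a detail from the patent.

```python
# Stand-in sketch of the parameter-server round trip: workers pull
# pickled global weights over HTTP, push pickled gradients back, and
# the server applies an SGD step to its in-memory model.
import pickle
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

WEIGHTS = [0.0, 0.0, 0.0]  # global model held by the parameter server
LR = 0.1

class ParamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Worker pulls the current global model (pickled bytes).
        body = pickle.dumps(WEIGHTS)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Worker pushes pickled gradients; server applies one SGD step.
        n = int(self.headers["Content-Length"])
        grads = pickle.loads(self.rfile.read(n))
        for i, g in enumerate(grads):
            WEIGHTS[i] -= LR * g
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), ParamHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

# Worker side: pull weights, push a (fake) gradient, pull again.
w0 = pickle.loads(urllib.request.urlopen(url).read())
urllib.request.urlopen(urllib.request.Request(url, data=pickle.dumps([1.0, 2.0, 3.0])))
w1 = pickle.loads(urllib.request.urlopen(url).read())
server.shutdown()
```

In the patented system the server side would be Flask route handlers and the client side urllib2 calls, but the pickled-parameters-over-HTTP contract is the same.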



Abstract

The invention discloses a distributed deep learning method and system based on a data-parallel strategy. The system comprises the distributed computing framework Spark, the PyTorch deep learning framework, the lightweight web application framework Flask, and related components such as pickle and urllib2. The Spark framework provides cluster resource management, data distribution, and distributed computing. The PyTorch deep learning framework provides the interface for defining neural networks and the upper-layer training and computation functions for neural networks; the Flask framework provides the parameter-server functionality; the urllib2 module provides network communication between the worker nodes and the parameter-server node; pickle is responsible for serializing and deserializing the parameters of the neural network model for transmission over the network. The invention effectively combines PyTorch and Spark, decouples PyTorch from the underlying distributed cluster through Spark, absorbs the advantages of each, provides a convenient training interface, and efficiently realizes a distributed training process based on data parallelism.
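The serialization role the abstract assigns to pickle is a plain bytes round trip: model parameters become bytes for the network hop and are restored on the other side. The state-dict-style parameter layout below is an illustrative stand-in for a real framework state dict, not a structure specified by the patent.

```python
# Pickle round trip for model parameters, as used for network transport:
# serialize the parameter dictionary to bytes, then restore it.
import pickle

params = {
    "layer1.weight": [[0.5, -0.2], [0.1, 0.9]],
    "layer1.bias": [0.0, 0.0],
}

wire_bytes = pickle.dumps(params)    # serialize for transmission
restored = pickle.loads(wire_bytes)  # deserialize on the receiver
```

The same two calls cover both directions of the worker/parameter-server exchange, which is why a single module suffices for the system's transport encoding.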

Description

Technical field [0001] The invention relates to a deep learning training system, in particular to a distributed deep learning method and system based on a data-parallel strategy. Background technique [0002] In recent years, with the advent of big data and the rapid development of artificial intelligence, especially deep learning, deep neural network models trained on large data sets have achieved breakthrough improvements and extensive application in many fields, from speech recognition and image recognition to natural language processing. Deep learning improves model quality through repeated gradient computation and iterative updating of the model, which requires a great deal of calculation and is a typical computation-intensive task; the training process of these neural networks is therefore very time-consuming. Although GPU (graphics processing unit) hardware technology, network model structures, and training methods have made some progress in recent years, the fact that sing...


Application Information

IPC(8): G06F9/38, G06N3/04, G06N3/08
CPC: G06F9/3885, G06N3/08, G06N3/045
Inventor: 李明, 侯孟书, 詹思瑜, 董浩, 王瀚, 席瑞, 董林森
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA