Method and system for distributed deep learning parameter quantization communication optimization

A distributed deep learning technology, applied in the field of deep learning, which solves the problem that the training speed of existing distributed deep learning models is limited, and achieves the effects of reducing the amount of communication data, increasing the training speed, and reducing the impact

Active Publication Date: 2019-04-16
HUAZHONG UNIV OF SCI & TECH


Problems solved by technology

[0019] In view of the above defects or improvement needs of the prior art, the present invention provides a distributed deep learning parameter quantization communication optimization method and system based on the discrete cosine transform, thereby solving the technical problem that the training speed of existing distributed deep learning models is subject to certain limitations.

Examples


Specific Embodiments

[0058] In an embodiment of the present invention, the gradient DCT transformation and quantized transmission method is shown in Figure 7, and its specific implementation is as follows:

[0059] S2.1: The number of floating-point numbers contained in the Gradient is known to be Num, and the Gradient is stored on the GPU on which it was computed. According to Num, record the square matrix G_1 = {Gradient_i, 0 ≤ i < n*n}, where Gradient_i represents the i-th floating-point number in the Gradient; likewise record G_2 = {Gradient_i, n*n ≤ i < 2*n*n}, and in general G_{k+1} = {Gradient_i, n*n*k ≤ i < n*n*(k+1)} (0 ≤ k < Num/(n*n)).

[0060] S2.2: According to the number of square matrices Num/(n*n)...
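The following minimal sketch illustrates S2.1 and the start of S2.2 in Python. The helper names, the block size n, and the quantization knobs keep and levels are illustrative assumptions rather than the patent's reference implementation, and scipy's dctn stands in for the GPU-parallel DCT the embodiment performs.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT; the embodiment runs this step on the GPU

def partition_gradient(gradient: np.ndarray, n: int) -> np.ndarray:
    """S2.1 (sketch): split a flat gradient of Num floats into Num/(n*n)
    square n-by-n matrices G_1 .. G_{Num/(n*n)}. Assumes n*n divides Num;
    a real implementation would pad the final block."""
    num = gradient.size
    assert num % (n * n) == 0, "sketch assumes Num is a multiple of n*n"
    return gradient.reshape(num // (n * n), n, n)

def dct_quantize_block(block: np.ndarray, keep: int, levels: int = 256):
    """S2.2 onward (sketch): 2-D DCT, high-frequency filtering, and uniform
    quantization of one block; `keep` and `levels` are hypothetical knobs."""
    coeffs = dctn(block, norm="ortho")
    coeffs[keep:, :] = 0.0  # zero high-frequency rows ...
    coeffs[:, keep:] = 0.0  # ... and columns (low-pass filtering)
    scale = np.abs(coeffs).max() / (levels // 2 - 1)
    if scale == 0.0:
        scale = 1.0  # all-zero block: any scale reconstructs zeros
    quantized = np.round(coeffs / scale).astype(np.int8)
    return quantized, scale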


Abstract

The invention discloses a method and system for distributed deep learning parameter quantization communication optimization based on the discrete cosine transform. The method comprises the steps of: performing a discrete cosine transform on gradient values in distributed deep learning, carrying out compression processing, performing an inverse quantization operation when the weights are updated, and thereby forming a distributed deep learning system with high communication efficiency. Before a working node sends gradient data to the parameter server, the gradient values are processed by gradient division, GPU parallel computing, discrete cosine transform with quantization, and high-frequency filtering compression, and the processed gradient values are then pushed to the parameter server. The working node obtains the weights from the parameter server through a pull operation, and then updates the weights on the current working node using inverse discrete cosine transform, inverse quantization, and error-compensation update methods. The method can effectively improve the communication efficiency between the working nodes and the parameter server in existing distributed deep learning frameworks and increase the model training speed.
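As a reading aid, the worker-side loop described in the abstract could look roughly as follows. Everything here, the ParameterServer-style push/pull interface, the compress/decompress helpers, and the placement of the error-compensation residual on the gradient path, is a hedged sketch of one common formulation, not code taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress(blocks, keep):
    """DCT + high-frequency filtering + uniform 8-bit quantization (sketch)."""
    payload = []
    for g in blocks:
        c = dctn(g, norm="ortho")
        c[keep:, :] = 0.0
        c[:, keep:] = 0.0
        scale = max(float(np.abs(c).max()) / 127.0, 1e-12)
        payload.append((np.round(c / scale).astype(np.int8), scale))
    return payload

def decompress(payload):
    """Inverse quantization followed by inverse DCT (sketch)."""
    return np.stack([idctn(q.astype(np.float32) * s, norm="ortho")
                     for q, s in payload])

def worker_step(server, model, batch, residual, n=8, keep=4):
    """One training step on a working node (hypothetical `server`/`model` API).
    `residual` carries the compression error forward between iterations and
    starts as an all-zero array of block shape."""
    grad = model.backward(batch)                # flat gradient of Num floats
    blocks = grad.reshape(-1, n, n) + residual  # error compensation: add residue
    payload = compress(blocks, keep)
    residual = blocks - decompress(payload)     # error kept for the next step
    server.push(payload)                        # quantized gradient out
    model.weights = server.pull()               # updated weights back
    return residual
```

The residual update is what keeps the lossy DCT/quantization pipeline from biasing training: whatever a block loses in one iteration is re-added before the next compression.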

Description

Technical Field

[0001] The invention belongs to the technical field of deep learning, and more specifically relates to a distributed deep learning parameter quantization communication optimization method and system based on the discrete cosine transform.

Background Technique

[0002] A Deep Neural Network (DNN) is an artificial neural network (Artificial Neural Network, ANN) composed of an input layer, multiple hidden layers, and an output layer. Each layer is composed of multiple neuron nodes, the neuron nodes of the preceding layer and the following layer are connected to each other, and each connection corresponds to a weight parameter. As shown in Figure 1, layer1 represents the input layer, layer4 represents the output layer, and layer2 and layer3 represent the hidden layers; the connection between neurons corresponds to a weight parameter w_jk^l, where l denotes the l-th layer, j denotes the j-th neuron in the layer before layer l, and k denotes the k-th neuron in layer l. T...
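The excerpt breaks off just after fixing the index convention for the weight w_jk^l. For orientation, the feed-forward relation under that convention would read as follows; the activation function σ and the bias term b_k^l are standard-notation assumptions, since the excerpt truncates before stating them:

$$ a_k^{l} = \sigma\Big( \sum_{j} w_{jk}^{l}\, a_j^{l-1} + b_k^{l} \Big), $$

where a_j^{l-1} is the output of the j-th neuron in the layer before layer l and a_k^l is the output of the k-th neuron in layer l.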


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04
CPC: G06N3/045
Inventors: 蒋文斌, 金海, 祝简, 马阳, 刘博, 彭晶, 刘湃
Owner: HUAZHONG UNIV OF SCI & TECH