Gradient compression method for distributed DNN training in edge computing environment

An edge computing and gradient compression technology, applied in the fields of distributed DNN training, gradient compression, and adaptive sparse ternary gradient compression, which can solve the problems of poor communication-efficiency and model-accuracy optimization, and achieve the effect of reducing communication costs.

Active Publication Date: 2021-10-01
HOHAI UNIV +1
Cites: 3 | Cited by: 2

Problems solved by technology

Since quantization and sparsification trade model accuracy for communication efficiency, gradient compression schemes with different degrees of quantization and sparsification differ significantly in their results, and existing schemes optimize communication efficiency and model accuracy poorly.

Embodiment Construction

[0047] The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention, not to limit its scope. After reading the present invention, those skilled in the art will appreciate that modifications to its various equivalent forms all fall within the scope defined by the appended claims of the present application.

[0048] This embodiment provides an adaptive sparse ternary gradient compression method for distributed DNN training in a multi-edge computing environment. By establishing a selection criterion based on the number of gradients, designing an entropy-based sparse threshold selection algorithm, introducing gradient residuals and momentum correction to offset the model accuracy lost to sparse compression, and combining ternary gradient quantization with lossless coding techniques, the method effectively reduces the communication cost of distributed DNN training.
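
The entropy-based sparse threshold selection algorithm itself is not available in this extract, so what follows is only a minimal sketch of the stated idea: let the entropy of the gradient-magnitude distribution steer how aggressively a layer is sparsified. The function name entropy_based_threshold, the histogram binning, and the linear mapping from normalized entropy to a keep ratio are all assumptions for illustration, not the patent's exact algorithm.

```python
import numpy as np

def entropy_based_threshold(grad, num_bins=64, min_ratio=0.001, max_ratio=0.01):
    """Hypothetical sketch: derive a sparsification threshold from the
    entropy of the gradient-magnitude histogram. A flat (high-entropy)
    distribution keeps a larger fraction of gradients; a peaked
    (low-entropy) one keeps fewer."""
    mags = np.abs(np.asarray(grad)).ravel()
    hist, _ = np.histogram(mags, bins=num_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    norm_entropy = entropy / np.log2(num_bins)          # normalized to [0, 1]
    keep_ratio = min_ratio + (max_ratio - min_ratio) * norm_entropy
    k = max(1, int(keep_ratio * mags.size))
    # threshold = magnitude of the k-th largest entry, so roughly k survive
    return np.partition(mags, -k)[-k]
```

Under this assumed rule, layers whose gradients are concentrated around a few values are compressed harder than layers with broadly spread gradients, which is one plausible reading of "evaluating gradient importance according to gradient entropy."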

Abstract

The invention discloses a gradient compression method for distributed DNN training in an edge computing environment. The method comprises the following steps: establishing a selection criterion based on the number of gradients, and screening out the network layers whose gradients meet the model compression criterion; evaluating gradient importance according to gradient entropy, adaptively selecting a gradient sparsification threshold, and performing gradient sparsification compression based on this flexible threshold; accumulating and optimizing gradient residuals through a gradient residual and momentum correction mechanism, thereby reducing the training-model performance loss caused by gradient sparsity; quantizing the sparse gradient with a ternary quantization compression scheme to obtain a sparse ternary tensor; and, using a lossless coding technique, recording the distances between non-zero gradients in the transmitted tensor, optimally encoding those distances, and outputting the sparse ternary gradient. This sparse ternary gradient compression algorithm, based on gradient number and gradient entropy, can adaptively compress the volume of gradients exchanged during the gradient exchange stage of distributed DNN training, thereby effectively improving the communication efficiency of distributed DNN training.
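
Since the claims are not available in this extract, the abstract's remaining steps (residual accumulation with momentum correction, ternary quantization, and distance coding of non-zero gradients) can only be illustrated schematically. The sketch below assumes DGC-style momentum correction, a mean-magnitude ternary scale, and plain gap recording; compress_gradient, decompress_gradient, and the factor beta are hypothetical names and defaults, not the patent's claimed scheme.

```python
import numpy as np

def compress_gradient(grad, residual, momentum, threshold, beta=0.9):
    """One round of sparse ternary compression for a single gradient tensor.
    Returns the (scale, signs, gaps) message plus updated local buffers."""
    # Momentum correction: accumulate velocity, then add past residuals
    momentum = beta * momentum + grad
    acc = residual + momentum
    # Sparsification: only entries above the adaptive threshold are sent;
    # everything else stays in the residual buffer for later rounds
    mask = np.abs(acc) > threshold
    new_residual = np.where(mask, 0.0, acc)
    # Ternary quantization: sent entries become {-scale, 0, +scale}
    kept = acc[mask]
    scale = np.abs(kept).mean() if kept.size else 0.0
    signs = np.sign(kept).astype(np.int8)
    # Distance coding: record gaps between non-zero positions rather than
    # absolute indices, keeping the integers small for later encoding
    idx = np.flatnonzero(mask)
    gaps = np.diff(idx, prepend=0)
    return (scale, signs, gaps), new_residual, momentum

def decompress_gradient(message, size):
    """Rebuild the dense sparse ternary gradient from (scale, signs, gaps)."""
    scale, signs, gaps = message
    out = np.zeros(size)
    out[np.cumsum(gaps)] = scale * signs
    return out
```

Only the (scale, signs, gaps) triple would cross the network; the receiver undoes the coding before aggregation. The final entropy or run-length coding of the gaps is omitted here for brevity.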

Description

Technical field

[0001] The invention relates to a gradient compression method for distributed DNN training in an edge computing environment, in particular to an adaptive sparse ternary gradient compression method for distributed DNN training in an edge computing environment, and belongs to the technical field of edge computing.

Background technique

[0002] With the rapid development of artificial intelligence, Deep Neural Networks (DNNs) are widely used in many fields of intelligence, including computer vision, natural language processing, and big data analysis. In each of these domains, the high accuracy of deep learning comes at the cost of high computational and storage requirements during the training phase. DNN models need to iteratively optimize millions of parameters over many epochs, which makes training deep neural network models both time-consuming and computationally expensive. Edge computing can meet the training requirements of DNNs to a certain extent, ...

Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F 9/50; G06N 3/04; G06N 3/08
CPC: G06F 9/5072; G06N 3/08; G06N 3/045
Inventor: 毛莺池, 吴俊, 聂华, 黄建新, 徐淑芳, 屠子健, 戚荣志, 郭宏乐
Owner: HOHAI UNIV