Federated learning adaptive gradient quantization method

A quantization and adaptive technology, applied in ensemble learning, adjustment of transmission mode, network traffic/resource management, etc., which can solve problems such as low-precision gradient quantization, aggravation of the straggler problem, and decreased model accuracy

Active Publication Date: 2021-08-27
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

When the required global model is large, limited network bandwidth and a growing number of working nodes exacerbate the communication bottleneck of federated learning and slow down the overall training process, while heterogeneous and dynamic networks cause client devices to drop out or exit, producing the straggler problem.
In this situation, if a gradient quantization algorithm with uniform precision is adopted, the communication times of fast and slow nodes differ greatly, and fast nodes waiting for slow nodes to complete parameter synchronization waste a large amount of computing and communication resources, which intensifies the straggler problem.
At the same time, if nodes with good link conditions use the same low-precision quantized gradients as nodes with poor link conditions, the accuracy of the final trained model also decreases.
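The excerpt does not reproduce the quantizer itself; purely for illustration, the following Python sketch assumes a QSGD-style uniform stochastic quantizer (an assumption, not the patented algorithm) to make the bit-width versus accuracy trade-off behind this problem statement concrete:

```python
import numpy as np

def quantize_uniform(grad, bits):
    """Stochastically quantize a gradient vector onto 2**bits - 1 uniform levels
    (QSGD-style sketch) and return the dequantized estimate."""
    levels = 2 ** bits - 1
    norm = np.max(np.abs(grad))
    if norm == 0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / norm * levels                 # map |g| into [0, levels]
    lower = np.floor(scaled)
    up = np.random.rand(*grad.shape) < (scaled - lower)   # stochastic rounding
    return np.sign(grad) * (lower + up) / levels * norm

# Fewer bits mean a smaller payload but a noisier gradient, so forcing every
# node to the same low precision also hurts nodes whose links could afford more.
g = np.random.randn(1000)
for b in (2, 4, 8):
    err = np.linalg.norm(quantize_uniform(g, b) - g) / np.linalg.norm(g)
    print(f"{b}-bit relative error: {err:.3f}")
```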

Method used



Examples


Detailed Description of the Embodiments

[0061] The specific embodiments of the present invention are described below so that those skilled in the art can understand the present invention, but it should be clear that the present invention is not limited to the scope of the specific embodiments. For those of ordinary skill in the art, as long as various changes fall within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are obvious, and all inventions and creations that make use of the concept of the present invention fall within the scope of protection.

[0062] As shown in Figure 1 and Figure 2, the present invention provides an adaptive gradient quantization method comprising the following steps S1 to S7:

[0063] S1. Initialize the training samples and local model of each working node;

[0064] In this embodiment, the data shard and local model that each working node obtains from the parameter server are initialized, wherein the data shard serves as the training samples....
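As a minimal sketch of step S1 only, the Python below assumes a parameter server that splits the training data into shards and hands each working node its shard plus a copy of the initial model parameters; the class and variable names are illustrative and do not come from the patent text:

```python
import numpy as np

class ParameterServer:
    def __init__(self, dataset, init_params, num_workers):
        # One data shard per working node, plus the shared initial model.
        self.shards = np.array_split(dataset, num_workers)
        self.params = init_params

    def init_worker(self, rank):
        # Each worker receives its training samples (data shard) and the model.
        return self.shards[rank], self.params.copy()

class WorkerNode:
    def __init__(self, rank, server):
        self.rank = rank
        self.samples, self.local_model = server.init_worker(rank)

dataset = np.random.randn(10_000, 20)                # toy training data
server = ParameterServer(dataset, np.zeros(20), num_workers=4)
workers = [WorkerNode(r, server) for r in range(4)]
print([len(w.samples) for w in workers])             # 2500 samples per node
```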



Abstract

The invention discloses a federated learning adaptive gradient quantization method, which comprises the following steps: initializing the training samples and local model of each working node; training the local model with the training samples to obtain a local gradient, and quantizing the local gradient according to the quantization level obtained by each working node; uploading the quantized local gradient to a parameter server for gradient aggregation, and transmitting the aggregation result back to each working node; each working node updating its local model parameters with the quantized aggregated gradient; and judging whether the number of iterations meets a preset interval threshold, and if so, broadcasting the link state of each working node and adjusting its quantization level accordingly, otherwise entering the next iteration of training, until a preset condition is met and training ends. According to the method, the quantization bit-width of the gradient is adaptively adjusted according to the real-time bandwidth of each node's link, the straggler problem is effectively alleviated, the bandwidth resource utilization rate is improved on top of the communication-overhead reduction achieved by traditional quantization methods, and more efficient federated learning training is accomplished.
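As a non-authoritative sketch of the workflow summarized in this abstract, the Python below assumes a simple linear model, a uniform stochastic quantizer, and an illustrative bandwidth-to-bit-width rule; these specifics are assumptions for illustration and the patent's actual quantization levels, adjustment rule, and stopping condition are defined in its claims, which are not reproduced here:

```python
import numpy as np

def quantize(grad, bits):
    # Uniform stochastic quantizer as sketched earlier (returned dequantized).
    levels = 2 ** bits - 1
    norm = np.max(np.abs(grad))
    if norm == 0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / norm * levels
    lower = np.floor(scaled)
    up = np.random.rand(*grad.shape) < (scaled - lower)
    return np.sign(grad) * (lower + up) / levels * norm

def pick_bits(bandwidth_mbps):
    # Illustrative policy (not from the patent): better links afford more bits.
    return 2 if bandwidth_mbps < 5 else 4 if bandwidth_mbps < 20 else 8

def train(workers, w, lr=0.1, rounds=50, interval=10):
    """workers: list of (X, y, bandwidth) tuples; w: initial model parameters."""
    bits = [pick_bits(bw) for _, _, bw in workers]        # initial quantization levels
    for t in range(rounds):
        grads = []
        for k, (X, y, _) in enumerate(workers):
            g = X.T @ (X @ w - y) / len(y)                # local gradient (linear model)
            grads.append(quantize(g, bits[k]))            # quantize before "uploading"
        w = w - lr * np.mean(grads, axis=0)               # aggregate and update the model
        if (t + 1) % interval == 0:                       # periodic link-state broadcast:
            noisy = [bw * np.random.uniform(0.5, 1.5) for _, _, bw in workers]
            bits = [pick_bits(bw) for bw in noisy]        # re-pick bit-widths from it
    return w

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)

def make_worker(bandwidth, n=200):
    X = rng.normal(size=(n, 5))
    return X, X @ true_w, bandwidth

workers = [make_worker(bw) for bw in (3.0, 12.0, 40.0)]   # three nodes, different links
print(np.round(train(workers, np.zeros(5)), 2))           # recovered parameters (approx.)
print(np.round(true_w, 2))
```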

Description

technical field

[0001] The invention relates to the technical field of gradient quantization, and in particular to a federated learning adaptive gradient quantization method.

Background technique

[0002] Due to the continuous growth of data volume and model scale, traditional machine learning can no longer meet application requirements, so distributed machine learning has become mainstream. In order to achieve multi-machine cooperation, communication between nodes is essential. However, as models and neural networks grow larger and larger, the number of parameters to be transmitted in each round is also very large, which may cause the communication time to be too long and even offset the computation time saved by parallelism. Therefore, how to reduce the communication cost has become a widely researched topic in the field of distributed machine learning. Asynchronous stochastic gradient descent, model compression ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04L1/00, H04W28/06, G06N20/20
CPC: H04L1/0006, H04W28/06, G06N20/20, Y02D30/50
Inventors: 范晨昱, 吴昊, 章小宁, 李永耀
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA