Efficient asynchronous federated learning method for reducing communication times

A technology in the field of efficient asynchronous federated learning that addresses the problem of heavy communication traffic caused by the large number of model parameters, achieving the effects of fewer communication rounds, fast convergence, and accelerated training.

Inactive Publication Date: 2021-07-09
HARBIN UNIV OF SCI & TECH
Cites: 3 · Cited by: 6

AI Technical Summary

Problems solved by technology

At the same time, in federated learning the large number of model parameters leads to heavy communication traffic when remote devices train, which has long been a problem plaguing federated learning.



Examples


Embodiment Construction

[0042] In order to describe the technical solutions in the embodiments of the present invention clearly and completely, the present invention is further described in detail below in conjunction with the drawings of the embodiments.

[0043] The efficient asynchronous federated learning flow of the embodiment of the present invention, as shown in figure 1, includes the following steps:

[0044] Step 1: Deploy the model; the parameter server recruits users to participate in training.

[0045] Step 1-1: When training starts, deploy the model to be trained on the parameter server and set the model version number to 0. This experiment uses a simple convolutional neural network for 10-class classification on the MNIST dataset and evenly distributes the 60,000 MNIST training samples among the 100 users participating in federated learning.
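The even data partition described in Step 1-1 can be sketched as follows. This is a minimal illustration with NumPy index splitting; the function name, the shuffling, and the fixed seed are assumptions for the sketch, not details from the patent:

```python
import numpy as np

def partition_evenly(num_samples: int, num_users: int, seed: int = 0):
    """Shuffle sample indices and split them evenly across users."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)
    return np.array_split(idx, num_users)

# 60,000 MNIST training samples spread over 100 users: 600 samples each.
shards = partition_evenly(60_000, 100)
print(len(shards), len(shards[0]))
```

Each user would then train only on its own shard, which matches the i.i.d. (evenly distributed) setting the embodiment describes.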

[0046] Step 1-2 In this experiment, the number of users participating in federated learning is set to 100, and 10 ...
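The pull/push protocol behind Steps 1 through 1-2 can be sketched as a toy single-process parameter server that tracks the model version number set in Step 1-1. All names and the staleness-damped update rule are illustrative assumptions; the patent's exact aggregation formula is not reproduced here:

```python
import numpy as np

class ParameterServer:
    """Toy parameter server: holds the global model and its version number."""

    def __init__(self, dim: int):
        self.weights = np.zeros(dim)
        self.version = 0  # incremented on every accepted update

    def pull(self):
        # A user fetches the current model and records its version.
        return self.weights.copy(), self.version

    def push(self, local_weights: np.ndarray, local_version: int):
        # Staleness = how many global updates happened since this user pulled.
        staleness = self.version - local_version
        alpha = 1.0 / (1.0 + staleness)  # stale updates get less weight (assumed rule)
        self.weights = (1 - alpha) * self.weights + alpha * local_weights
        self.version += 1
        return staleness

server = ParameterServer(dim=4)
w, v = server.pull()
server.push(w + 1.0, v)  # a fresh update: staleness 0, applied at full weight
print(server.version, server.weights[0])
```

In the asynchronous setting, many users pull and push independently, so pushes routinely arrive with nonzero staleness; damping them is exactly the role of the adaptive hyper-parameter described in the abstract.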


PUM

No PUM

Abstract

The invention relates to an efficient asynchronous federated learning method for reducing the number of communication rounds. The method comprises the following steps: first, designing a hyper-parameter r that adapts to version staleness, reducing the error that staleness introduces into asynchronous federated learning and guiding model convergence; second, to address the heavy communication traffic of federated learning, increasing the learning rate and decreasing the number of local rounds in the early stage, then gradually decreasing the learning rate while increasing the number of local rounds. In this way, model performance remains essentially unchanged while the total number of communication rounds for model training is effectively reduced, allowing the system to carry out asynchronous federated learning more efficiently.
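The two ideas in the abstract can be sketched together. The exponential form of the staleness-adaptive factor and the linear learning-rate/local-round schedule are assumed example functions; the patent only states that r shrinks with staleness and that the learning rate decreases while local rounds increase over training:

```python
import math

def staleness_factor(staleness: int, a: float = 0.5) -> float:
    """Hyper-parameter r that shrinks as version staleness grows.

    The exponential decay is an assumed concrete form; the patent only
    requires r to adapt to staleness so stale updates are damped."""
    return math.exp(-a * staleness)

def schedule(round_idx: int, total_rounds: int,
             lr_max: float = 0.1, lr_min: float = 0.01,
             e_min: int = 1, e_max: int = 10):
    """Early rounds: high learning rate, few local rounds.
    Late rounds: low learning rate, many local rounds.
    Linear interpolation between the endpoints is assumed."""
    t = round_idx / max(1, total_rounds - 1)
    lr = lr_max + t * (lr_min - lr_max)
    local_rounds = round(e_min + t * (e_max - e_min))
    return lr, local_rounds

print(staleness_factor(0), staleness_factor(4))
print(schedule(0, 100), schedule(99, 100))
```

The intuition: early on, cheap fast progress comes from large global steps with little local work; later, more local computation per round substitutes for communication, cutting the total number of rounds.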

Description

Technical field:

[0001] The invention relates to an efficient asynchronous federated learning method for reducing the number of communication rounds; the method has good applications in the field of federated learning.

Background technique:

[0002] In asynchronous federated learning, the version staleness of a local model introduces error into updates of the global model and can even cause them to fail. Controlling version staleness reduces this error and guides the model to converge. At the same time, the large number of model parameters leads to heavy communication traffic when remote devices train, which has long been a problem plaguing federated learning. Typically, methods such as model distillation are used to reduce the number of model parameters and thus the amount of information in each communication.

[0003] How to control the traffic in the process of federated learning and c...

Claims


Application Information

Patent Timeline: no application
Patent Type & Authority: Applications (China)
IPC(8): G06K9/62, G06N3/04, G06N3/08, G06N20/00
CPC: G06N20/00, G06N3/08, G06N3/045, G06F18/214
Inventor: 李子祺, 罗智勇, 刘光辉
Owner: HARBIN UNIV OF SCI & TECH