
Federated learning method and system based on batch size and gradient compression ratio adjustment

A federated learning and gradient-compression technique, applied to neural learning methods, neural architectures, biological neural network models, etc. It addresses the problem that batching and gradient compression affect the model's convergence rate, achieving the effects of ensuring training accuracy, improving the convergence rate, and reducing communication pressure.

Active Publication Date: 2020-07-10
ZHEJIANG UNIV
Cites: 6 · Cited by: 38

AI Technical Summary

Problems solved by technology

However, both batching and gradient compression affect the convergence rate of the model.



Examples


Embodiment 1

[0068] The federated learning method based on adjusting the batch size and gradient compression rate provided by this embodiment is applicable to scenarios in which multiple mobile terminals and an edge server connected to a communication hotspot (such as a base station) jointly train an artificial intelligence model. Other wireless communication technologies can operate in the same manner, so this embodiment mainly considers the mobile communication case.

[0069] In this embodiment, each terminal performs its local computation in batches, and the gradient compression method is quantization. In particular, fixed-length quantization is adopted; the quantization process is shown in Figure 2: the gradient information is quantized and encoded with a fixed number of bits and then transmitted to the edge server. When more quantization bits are used to represent the gradient, the original gradient in...
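The fixed-length quantization described above can be sketched as follows. This is a minimal illustration, not the patent's exact encoder: the uniform min-max quantization levels, the parameter name `num_bits`, and the function name are assumptions made for clarity.

```python
import numpy as np

def quantize_gradient(grad, num_bits):
    """Fixed-length quantization sketch (assumed scheme, not the patent's
    exact encoder): each component is mapped to one of 2**num_bits uniform
    levels between the gradient's min and max, so every component is
    transmitted with exactly num_bits bits."""
    g_min, g_max = grad.min(), grad.max()
    levels = 2 ** num_bits - 1
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    # Encode: the integer codes are what would be sent on the uplink.
    codes = np.round((grad - g_min) / scale).astype(np.int64)
    # Decode at the edge server: reconstruct an approximate gradient.
    dequantized = g_min + codes * scale
    return codes, dequantized

grad = np.array([0.12, -0.5, 0.33, 0.9, -0.07])
codes, approx = quantize_gradient(grad, num_bits=4)
```

With more bits, `scale` shrinks and the reconstruction error per component (at most half a quantization step) decreases, at the cost of more uplink traffic, matching the trade-off the paragraph describes.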

Embodiment 2

[0097] The adjustment method provided in this embodiment is applicable to scenarios in which multiple mobile terminals and an edge server connected to a communication hotspot (such as a base station) jointly train an artificial intelligence model. Other wireless communication technologies can operate in the same manner, so this embodiment mainly considers the mobile communication case.

[0098] In this embodiment, each terminal performs its local computation in batches, and the gradient compression method is sparsification. In particular, the sparsification method selects some of the relatively large gradients for transmission. The sparsification process is shown in Figure 5: after the gradient information is sparsified, the selected gradient values and their indices are transmitted to the edge server. When more gradient information is retained, the loss of gradient information is reduced, but the amount of transmitted information increases; when less ...
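The sparsification step can be sketched as a top-k selection by magnitude. This is an assumed interpretation of "selecting some relatively large gradients"; the `keep_ratio` parameter and function name are illustrative, not from the patent.

```python
import numpy as np

def sparsify_gradient(grad, keep_ratio):
    """Top-k gradient sparsification sketch (assumed scheme): keep only
    the largest-magnitude components and their indices; the edge server
    reconstructs a sparse gradient with zeros everywhere else."""
    k = max(1, int(len(grad) * keep_ratio))
    idx = np.argsort(np.abs(grad))[-k:]   # indices of the top-k magnitudes
    values = grad[idx]                    # selected gradient values to transmit
    # Reconstruction at the edge server from (indices, values).
    sparse = np.zeros_like(grad)
    sparse[idx] = values
    return idx, values, sparse
```

A larger `keep_ratio` retains more gradient information but transmits more (index, value) pairs, which is exactly the trade-off paragraph [0098] describes.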

Embodiment 3

[0127] Embodiment 3 provides a federated learning system based on adjusting the batch size and gradient compression rate, comprising an edge server connected to a base station and multiple terminals communicating wirelessly with the edge server.

[0128] The edge server adjusts each terminal's batch size and gradient compression rate based on the current batch size and gradient compression rate, in combination with the terminal's computing power and the communication capability between the edge server and the terminal, and transmits the adjusted batch size and gradient compression rate to the terminal;

[0129] The terminal performs model learning according to the received batch size, compresses the resulting gradient information according to the received gradient compression rate, and sends it to the edge server;

[0130] After the edge server averages all the received gradient information, it synchronizes the...
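The system loop of paragraphs [0127]-[0130] can be sketched as follows. The adjustment rule here is hypothetical, since the visible text does not give the patent's formula: faster terminals are assigned larger batches and terminals with better channels keep more of their gradient. The learning rate of 0.1 and the dictionary-based terminal description are likewise illustrative.

```python
import numpy as np

def adjust(batch_size, keep_ratio, compute_speed, channel_rate):
    """Hypothetical adjustment rule (the patent's exact formula is not in
    the visible text): scale the batch with compute speed and the keep
    ratio with channel quality, both relative to a reference of 1.0."""
    new_batch = max(1, int(batch_size * compute_speed))
    new_ratio = min(1.0, keep_ratio * channel_rate)
    return new_batch, new_ratio

def federated_round(model, terminals):
    """One round of the system in [0127]-[0130]: the server sends adjusted
    settings, each terminal returns a compressed (top-k sparsified)
    gradient, and the server averages them and synchronizes the model."""
    grads = []
    for t in terminals:
        b, r = adjust(t["batch"], t["ratio"], t["compute"], t["channel"])
        g = t["local_gradient"](model, b)   # local batch computation
        k = max(1, int(len(g) * r))         # top-k sparsification
        idx = np.argsort(np.abs(g))[-k:]
        sparse = np.zeros_like(g)
        sparse[idx] = g[idx]
        grads.append(sparse)
    avg = np.mean(grads, axis=0)            # average all received gradients
    return model - 0.1 * avg                # synchronized update (lr assumed)
```

The round structure (adjust, compute, compress, average, synchronize) follows the paragraphs above; everything quantitative in the sketch is an assumption.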



Abstract

The invention discloses a federated learning method and system based on batch size and gradient compression ratio adjustment, which are used to improve model training performance. The method comprises the following steps: in a federated learning scenario, enabling multiple terminals to share uplink wireless channel resources; completing the training of a neural network model together with an edge server based on each terminal's local training data; during model training, having each terminal compute gradients locally using a batch method and compress the gradients before uplink transmission; and adjusting the batch size and gradient compression rate according to each terminal's computing power and channel state, so as to improve the convergence rate of model training while keeping the training time bounded and without reducing the model's accuracy.

Description

Technical Field

[0001] The invention relates to the fields of artificial intelligence and communication, and in particular to a federated learning method and system based on adjusting the batch size and gradient compression rate.

Background Art

[0002] In recent years, with the continuous improvement of hardware and software, artificial intelligence (AI) technology has entered a peak period of development. It mines key information from massive data to enable various applications, such as face recognition, speech recognition, and data mining. However, in scenarios where data privacy is sensitive, such as patient information in hospitals and customer information in banks, data is usually difficult to obtain; this is commonly known as the information-island problem. If existing artificial intelligence training methods are used, it is difficult to obtain effective results due to insufficient data.

[0003] Federated learning (FL), proposed by ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N 3/08, G06N 3/04
CPC: G06N 3/08, G06N 3/045
Inventors: 刘胜利, 余官定, 殷锐, 袁建涛
Owner: ZHEJIANG UNIV