Federated learning global model training method based on differential privacy and quantification

A differential-privacy and global-model technology, applied in the field of data processing, that addresses the problems of high communication cost, high computing overhead, and privacy leakage, and achieves the effect of improving communication efficiency and reducing the required communication bandwidth.

Pending Publication Date: 2021-11-02
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to address the deficiencies of the above-mentioned existing technologies and to propose a federated learning global model training method based on differential privacy and quantization, which solves the problems of privacy leakage, large communication cost, and large computational overhead that arise when uploading local model gradients in federated learning.




Embodiment Construction

[0033] Usually, federated learning uses private data distributed locally to perform distributed training to obtain a machine learning model with good predictive ability. Specifically, the central server obtains the global model gradient for updating the federated learning global model by aggregating local model gradients obtained by local users through local training. Then, the central server uses the global model gradient and the global model learning rate to update the federated learning global model. The federated learning global model update process is performed iteratively until a certain training termination condition is met.
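As a minimal illustration of this iterative server-side loop (all names, shapes, and values below are hypothetical stand-ins, not taken from the patent), a sketch in Python:

```python
import numpy as np

def server_round(global_weights, local_gradients, global_lr=0.1):
    """One round: aggregate local gradients, then update the global model."""
    # Aggregation of the users' local model gradients (here: a simple mean).
    global_gradient = np.mean(local_gradients, axis=0)
    # Gradient-descent-style update using the global model learning rate.
    return global_weights - global_lr * global_gradient

weights = np.zeros(10)                                       # stand-in global model
for round_idx in range(100):                                 # termination: max rounds
    # Stand-in for gradients produced by local users on private data.
    grads = [np.random.randn(10) * 0.01 for _ in range(5)]
    weights = server_round(weights, grads)
```

In practice the termination condition would also include convergence of the global model, as paragraph [0033] notes.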

[0034] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0035] Referring to Figure 1, the implementation steps of the present invention are described in further detail.

[0036] Step 1. The central server delivers the pre-trained federated learning global model.
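A minimal sketch of this delivery step, assuming a simple in-memory representation of local users (the dict structure and names are hypothetical):

```python
import copy

def broadcast_global_model(global_weights, users):
    """Deliver an independent copy of the global model to every local user."""
    for user in users:
        user["model"] = copy.deepcopy(global_weights)
    return users

users = broadcast_global_model([0.0] * 10, [{"id": i} for i in range(5)])
```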

[0037] ...



Abstract

A federated learning global model training method based on differential privacy and quantification comprises the following steps: a central server issues a pre-trained federated learning global model; each local user generates a local model gradient; noise addition, threshold quantification, and compression quantification are performed in sequence on each local user's gradient; the compressed and quantified local model gradients are uploaded to the central server; the central server performs weighted aggregation on the uploaded gradients, updates the global model, and issues it to each local user; training ends when every local user's privacy budget is exhausted or the federated learning global model converges. With this method, the privacy of local users is protected without loss of precision in the federated learning global model, the communication overhead of the transmission process is reduced, and the training efficiency of the federated learning global model is improved.
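A hedged sketch of the per-user processing chain the abstract describes. The specific mechanisms here are assumptions (Laplace noise, L2 clipping, magnitude thresholding, signed fixed-bit quantization), since the patent's full description is not available; the function and parameter names are illustrative:

```python
import numpy as np

def privatize_and_quantize(grad, clip=1.0, epsilon=1.0, threshold=0.01, bits=8):
    """Noise addition -> threshold quantification -> compression quantification."""
    # 1) Differential privacy: clip the gradient's L2 norm, then add Laplace
    #    noise calibrated to the clipping bound and the privacy budget epsilon.
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    noisy = grad + np.random.laplace(scale=clip / epsilon, size=grad.shape)
    # 2) Threshold quantification: drop entries whose magnitude falls below the
    #    threshold, so only significant components are transmitted.
    noisy = np.where(np.abs(noisy) < threshold, 0.0, noisy)
    # 3) Compression quantification: map the surviving values onto signed
    #    low-bit integers; the server rescales by `scale` before aggregating.
    scale = np.max(np.abs(noisy)) + 1e-12
    levels = 2 ** (bits - 1) - 1
    quantized = np.round(noisy / scale * levels).astype(np.int8)
    return quantized, scale

q, scale = privatize_and_quantize(np.random.randn(1000))
```

Uploading the int8 tensor plus one scale factor, rather than full-precision floats, is what yields the reduced communication bandwidth claimed above.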

Description

Technical Field

[0001] The invention belongs to the technical field of data processing, and further relates to a federated learning global model training method based on differential privacy and quantization within the field of machine learning model gradient data processing. The present invention can be used to train machine learning models from local data scattered across users; it protects individual user privacy, reduces the transmission scale of user gradient data, and thereby reduces the communication overhead of the transmission process, with the purpose of improving the training efficiency of the federated learning global model.

Background Technique

[0002] Federated learning allows users to jointly obtain a shared global model without storing data centrally. Specifically, users train their own local models on local data and upload the trained local model gradient data to a central server, which aggregates the gradients and updates the global model. Duri...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F21/62; G06N20/00
CPC: G06F21/6245; G06N20/00
Inventor: 王子龙, 周伊琳, 陈谦, 肖丹, 王鸿波, 陈嘉伟, 刘蕴琪, 安泽宇
Owner: XIDIAN UNIV