Model training method based on decentralized federated learning
A decentralized model training technique, applicable to neural learning methods, ensemble learning, biological neural network models, and the like. It addresses problems such as noise limitations, inability to execute concurrently, and low efficiency, with the effects of improving stability, strengthening protection, and preventing the interception of data results.
Examples
Embodiment 1
[0046] This embodiment provides a model training method based on decentralized federated learning, with specific reference to Figure 1. As shown in Figure 1, the method includes steps S1-S4:
[0047] Step S1: construct a federated learning network, the federated learning network comprising a plurality of nodes and a broadcast bus, where each node is connected to the broadcast bus and the nodes communicate through the broadcast bus;
[0048] Step S2: dynamically select one of the nodes as the master node, the remaining nodes acting as slave nodes relative to the master node; the master node transmits the first model data to each slave node;
[0049] Step S3: each slave node performs training based on the first model data and its local data set to obtain second model data, adds noise data to the second model data to obtain third model data, and transmits the third model data to the master node;
[0050] Step S4: the master node receives a...
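Steps S1-S4 above can be sketched as a single communication round. This is a minimal illustration, not the patented method itself: the local training objective (scalar least squares), the Gaussian noise model, and aggregation by simple averaging in step S4 are all assumptions, since the source truncates before describing the master node's aggregation.

```python
import random

def local_train(model, data, lr=0.1):
    """Slave node (step S3): one gradient step on a scalar least-squares
    objective y ~ w * x over the node's local data set (assumed objective)."""
    grad = sum((model * x - y) * x for x, y in data) / len(data)
    return model - lr * grad

def add_noise(second_model, scale=0.01):
    """Slave node (step S3): perturb the second model data with Gaussian
    noise to obtain the third model data (assumed noise distribution)."""
    return second_model + random.gauss(0.0, scale)

def federated_round(nodes, global_model):
    """One round over the broadcast bus (steps S2-S4): dynamically pick a
    master, slaves train on local data and add noise, master aggregates.
    Averaging in S4 is an assumption -- the source truncates there."""
    master = random.randrange(len(nodes))          # S2: dynamic master selection
    slaves = [i for i in range(len(nodes)) if i != master]
    uploads = [add_noise(local_train(global_model, nodes[i])) for i in slaves]
    return sum(uploads) / len(uploads)             # S4: assumed aggregation
```

Because the master is re-elected every round, no single node permanently holds the aggregation role, which is the decentralization property the method relies on.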
Embodiment 2
[0081] This embodiment provides a model training device based on decentralized federated learning, used to implement the model training method based on decentralized federated learning of Embodiment 1. As shown in Figure 4, the device includes the following modules:
[0082] a network building module, configured to construct a federated learning network, the federated learning network comprising a plurality of nodes and a broadcast bus, where each node is connected to the broadcast bus and the nodes communicate through the broadcast bus;
[0083] a model distribution module, configured to dynamically select one of the nodes as the master node, the remaining nodes acting as slave nodes relative to the master node, the master node transmitting the first model data to each slave node;
[0084] a training upload module, configured for each slave node to perform training based on the first model data and its local data set to obtain second model data, add n...
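The module decomposition above can be sketched as one class per module. This is an illustrative skeleton only: the class names mirror the module names, the broadcast bus is modeled as a shared message list, and the local-update and noise placeholders are assumptions, not the patented implementation.

```python
import random

class NetworkBuildingModule:
    """Constructs the federated learning network: nodes sharing a broadcast bus."""
    def build(self, node_ids):
        # the broadcast bus is modeled as a shared message list (assumption)
        return {"nodes": list(node_ids), "bus": []}

class ModelDistributionModule:
    """Dynamically selects a master node and sends the first model data to slaves."""
    def distribute(self, network, first_model):
        master = random.choice(network["nodes"])
        for node in network["nodes"]:
            if node != master:
                network["bus"].append({"to": node, "model": first_model})
        return master

class TrainingUploadModule:
    """Each slave trains on local data (second model data), adds noise
    (third model data), and uploads to the master. local_update and the
    Gaussian noise are placeholder assumptions."""
    def train_and_upload(self, network, master, local_update, noise_scale=0.01):
        uploads = []
        for msg in network["bus"]:
            second = local_update(msg["model"])              # second model data
            third = second + random.gauss(0.0, noise_scale)  # third model data
            uploads.append({"to": master, "model": third})
        return uploads
```

Keeping each module stateless over a shared network object makes the three steps independently testable, matching the one-module-per-step structure the embodiment describes.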
Embodiment 3
[0087] This embodiment also provides an electronic device. Referring to Figure 5, the device includes a memory 404 and a processor 402, where a computer program is stored in the memory 404, and the processor 402 is configured to run the computer program to execute the steps of any model training method based on decentralized federated learning described in Embodiment 1.
[0088] Specifically, the processor 402 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC for short), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
[0089] Among others, the memory 404 may include mass storage for data or instructions. By way of example and not limitation, the memory 404 may include a hard disk drive (HDD for short), a floppy disk drive, a solid-state drive (SSD for short), flash memory, an optical disk, a magneto-optical disk, magnetic tap...