
End-edge collaborative federated learning optimization method based on edge computing

An edge computing based optimization method, applied to neural learning methods, computing, machine learning, etc. It addresses the problems of high computing overhead, excessive local update and training rounds, and high global aggregation communication delay, with the effect of reducing communication delay, speeding up training, and reducing the number of global aggregations.

Active Publication Date: 2021-03-26
SUN YAT SEN UNIV
Cites: 7 | Cited by: 16

AI Technical Summary

Problems solved by technology

[0007] In order to solve the problems of high computing overhead and high global aggregation communication delay caused by excessive local update and training rounds on mobile terminals during federated learning, the present invention provides an edge computing-based end-edge collaborative federated learning optimization method.



Examples


Embodiment 1

[0033] As shown in Figure 1 and Figure 2, an end-edge collaborative federated learning optimization method based on edge computing includes the following steps: S1: construct a federated learning optimization system architecture comprising a cloud data center, edge servers, and mobile terminals connected in sequence by communication links; S2: the mobile terminals and the edge servers cooperate to locally update the global network model of the cloud data center; S3: after the local update is completed, the edge servers obtain the updated model parameters and send them to the cloud data center for global aggregation, producing a new global network model; S4: the cloud data center performs a model accuracy test on the new global network model.
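Steps S1 to S4 amount to a hierarchical training loop: mobile terminals compute local updates, their edge server pre-aggregates them, and the cloud data center aggregates across edge servers and then tests the result. The sketch below, in Python with NumPy, is a minimal illustration only; the class names, the toy least-squares local update, and the plain averaging used for aggregation are assumptions made for this example, not details taken from the patent.

```python
# Minimal sketch of the S1-S4 loop in Embodiment 1. All names and the
# use of NumPy vectors as "model parameters" are illustrative assumptions.
import numpy as np

class MobileTerminal:
    """Holds local training data and performs the device-side local update (part of S2)."""
    def __init__(self, data, labels):
        self.data, self.labels = data, labels

    def local_update(self, params, lr=0.01, steps=5):
        # A few gradient steps of a toy least-squares objective on local data,
        # standing in for training the global network model locally.
        for _ in range(steps):
            grad = self.data.T @ (self.data @ params - self.labels) / len(self.labels)
            params = params - lr * grad
        return params

class EdgeServer:
    """Coordinates its attached terminals and pre-aggregates their updates (S2/S3)."""
    def __init__(self, terminals):
        self.terminals = terminals

    def cooperative_update(self, global_params):
        # Each attached terminal refines the current global model; the edge
        # server merges the results before contacting the cloud, which is what
        # reduces the number of cloud-level (global) aggregations.
        updates = [t.local_update(global_params.copy()) for t in self.terminals]
        return np.mean(updates, axis=0)

class CloudDataCenter:
    """Keeps the global model, aggregates edge results (S3), and tests it (S4)."""
    def __init__(self, dim, test_data, test_labels):
        self.global_params = np.zeros(dim)
        self.test_data, self.test_labels = test_data, test_labels

    def global_aggregate(self, edge_params):
        self.global_params = np.mean(edge_params, axis=0)

    def test_error(self):
        # Accuracy-test stand-in: mean squared error on held-out data.
        pred = self.test_data @ self.global_params
        return float(np.mean((pred - self.test_labels) ** 2))

# S1: build the three-tier device-edge-cloud architecture and run a few rounds.
rng = np.random.default_rng(0)
dim = 4
def make_terminal():
    return MobileTerminal(rng.normal(size=(32, dim)), rng.normal(size=32))

edges = [EdgeServer([make_terminal() for _ in range(3)]) for _ in range(2)]
cloud = CloudDataCenter(dim, rng.normal(size=(64, dim)), rng.normal(size=64))

for round_idx in range(3):
    edge_params = [edge.cooperative_update(cloud.global_params) for edge in edges]  # S2
    cloud.global_aggregate(edge_params)                                             # S3
    print(f"round {round_idx}: held-out MSE = {cloud.test_error():.4f}")            # S4
```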

[0034] In the above solution, the federated learning optimization system architecture replaces the traditional two-layer device-cloud architecture with a three-layer device-edge-cloud architecture, which mainl...



Abstract

The invention discloses an edge computing-based end-edge collaborative federated learning optimization method, and the method comprises the following steps: constructing a federated learning optimization system architecture which comprises a cloud data center, an edge server and a mobile terminal; the mobile terminal and the edge server cooperatively perform local updating of the model; after local updating is completed, the edge server sends the updated model parameters to the cloud data center for global aggregation, and a new model is obtained; and the cloud data center performs model precision testing on the new model. According to the invention, the federated learning optimization system architecture is established, and the edge server and the mobile terminal are added for cooperative local updating, so that the computing overhead of the mobile terminal is greatly reduced; and after the collaborative updating is finished, the operation results of the plurality of mobile terminals are aggregated on the edge server, so that the required global aggregation frequency is reduced, the communication delay is greatly reduced, and the model training speed of federated learning is increased.
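Viewed as a two-level weighted-averaging scheme, the aggregation the abstract describes can be written in a standard FedAvg-style form. The formulas below are a hedged reconstruction consistent with that description, not necessarily the patent's exact aggregation rule; the symbols w_k, n_k, D_e and E (terminal models, local sample counts, the set of terminals under edge server e, and the number of edge servers) are introduced here for illustration.

```latex
% Edge-level aggregation: edge server e merges the models of its terminals,
% weighted by their local sample counts.
w_e = \sum_{k \in \mathcal{D}_e} \frac{n_k}{n_e}\, w_k ,
\qquad n_e = \sum_{k \in \mathcal{D}_e} n_k .

% Cloud-level (global) aggregation over the E edge servers.
w = \sum_{e=1}^{E} \frac{n_e}{n}\, w_e ,
\qquad n = \sum_{e=1}^{E} n_e .
```

Because the cloud now exchanges parameters only with the E edge servers, and only after each edge server has already merged its terminals' updates, fewer cloud-level aggregation rounds are needed for a given amount of local training, which is the source of the reduced communication delay claimed above.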

Description

Technical field

[0001] The invention relates to the fields of edge computing and federated learning, and more specifically to an edge computing-based end-edge collaborative federated learning optimization method.

Background technique

[0002] With the advent of the Internet of Everything era, the amount of data in the network edge environment has grown rapidly. The core network cannot bear such a large volume of data transmission, and some existing delay-sensitive applications cannot tolerate the huge delay of transmitting large amounts of data to the cloud data center for processing. Edge computing has therefore emerged, using edge nodes in the network edge environment for efficient data processing. Gartner estimates that by 2025, 75% of data will be processed outside the traditional cloud data center, and the collaborative data processing of mobile terminals and edge nodes will improve the data digestion capability in the network edge env...


Application Information

IPC (8): H04L29/08, G06N3/04, G06N3/08, G06N20/00
CPC: H04L67/10, G06N3/08, G06N20/00, G06N3/045, Y02D10/00
Inventor: 刘芳, 黎燊, 肖侬, 金航 (Liu Fang, Li Shen, Xiao Nong, Jin Hang)
Owner: SUN YAT SEN UNIV