
Federated learning membership inference attack defense method based on adversarial perturbation

A technology based on adversarial perturbation and membership inference, applied in machine learning, electrical digital data processing, biological neural network models, etc. It addresses problems such as membership inference attacks leaking the privacy of users' local data, and achieves the effect of reducing the loss of accuracy.

Pending Publication Date: 2021-12-14
BEIJING INSTITUTE OF TECHNOLOGY


Problems solved by technology

[0008] The purpose of the present invention is to solve the technical problem that existing federated learning is vulnerable to membership inference attacks, which leak the privacy of users' local data, by proposing an adversarial-perturbation-based federated learning membership inference attack defense method.



Examples


Embodiment 1

[0059] In this embodiment, a defense model for the adversarial-perturbation-based federated learning membership inference attack defense method is established, as shown in Figure 1.

[0060] Figure 1 depicts the following federated learning membership inference attack defense scenario.

[0061] There are 100 participants and 1 server in this scenario. The participants train local models and upload the parameters, and the server aggregates the parameters and updates the global model; this constitutes one round of the federated learning training process. A total of 100 federated learning rounds are performed in this scenario, and each participant iterates for 5 epochs each time the local model is trained. The federated learning hyperparameters are set to batch size = 128, learning rate = 0.1, momentum = 0.9, milestones = [60, 90], where batch size refers to the amount of data used in one training step when a participant trains its local ...
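
For concreteness, the training loop this paragraph describes can be sketched in PyTorch-style Python as below. This is a minimal sketch, not the patented method itself: the model architecture and data loaders are unspecified placeholders, plain FedAvg aggregation is assumed for the server, and the milestone schedule is assumed to decay the learning rate by a factor of 0.1 at global rounds 60 and 90.

    import copy
    import torch

    NUM_PARTICIPANTS = 100  # participants in the scenario
    GLOBAL_ROUNDS = 100     # federated learning rounds
    LOCAL_EPOCHS = 5        # local epochs per participant per round

    def local_update(global_model, loader, lr):
        # One participant: train a copy of the global model for LOCAL_EPOCHS epochs.
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(LOCAL_EPOCHS):
            for x, y in loader:  # loader built with batch_size=128
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model.state_dict()

    def fedavg(states):
        # Server: average the uploaded parameter tensors (FedAvg, assumed).
        return {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
                for k in states[0]}

    def train(global_model, loaders):
        lr = 0.1
        for rnd in range(1, GLOBAL_ROUNDS + 1):
            if rnd in (60, 90):  # milestones=[60, 90]; decay factor 0.1 assumed
                lr *= 0.1
            states = [local_update(global_model, ld, lr) for ld in loaders]
            global_model.load_state_dict(fedavg(states))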

Embodiment 2

[0078] In this embodiment, the results of the method of the present invention are compared across various scenarios, verifying that the defense method of the present invention is applicable to various data sets and target model structures. For the Purchase100 data set, a fully connected neural network is used as the target model; for the CIFAR10 data set (http://www.cs.toronto.edu/~kriz/cifar.html), the AlexNet image classification model is used as the target model; for the CIFAR100 data set (http://www.cs.toronto.edu/~kriz/cifar.html), the AlexNet model and the DenseNet12 model are used as target models respectively, where DenseNet12 corresponds to DenseNet-BC (depth L = 100, number of feature maps output by each network layer k = 12). When training undefended (skipping the noise optimization part of steps 2 and 3) and defended federated learning models with the different data sets and target model structures, due to the limit of the CIFAR100 data set it is necessary to set the n...
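
A hedged sketch of how the defended and undefended runs in this embodiment could be toggled, assuming (as the parenthetical above suggests) that the defense amounts to adding an optimized noise term to the locally trained parameters before upload. The function `optimize_noise` is a hypothetical placeholder for the noise-optimization part of steps 2 and 3, which is not reproduced in this excerpt; `local_update` is the local-training sketch from Embodiment 1.

    def participant_round(global_model, loader, lr, defend=True):
        state = local_update(global_model, loader, lr)  # local training (assumed step 1)
        if defend:
            # Noise-optimization part of steps 2 and 3; skipped in the
            # undefended baseline. `optimize_noise` is a hypothetical
            # placeholder for the optimization not reproduced here.
            noise = optimize_noise(state)
            state = {k: v + noise[k] for k, v in state.items()}
        return state  # uploaded to the server in place of the raw parameters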

Embodiment 3

[0084] This embodiment compares the method described in the present invention with various federated learning membership inference attack defense methods, verifying that the defense method of the present invention provides a better membership inference defense effect than other defense methods while maintaining a lower performance loss.

[0085] CIFAR10 is used as the data set and the AlexNet model as the target model. The target model trained without any defense method has a 79% attack accuracy, 95% training accuracy, and 0.1% training loss; the defense method proposed by the present invention achieves a 53% attack accuracy, 95% training accuracy, and 0.1% training loss.

[0086] The first defense method compared (https://dl.acm.org/doi/abs/10.1145/3243734.3243855) is Ad-reg, a federated learning membership inference attack defense method based on adversarial regularization. With the Ad-reg adversarial regularization factor λ = 2, Ad-reg has a 59% attack accuracy, 99% training accuracy, and 0.5% training loss ...
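
Collecting the figures reported so far for CIFAR10 with AlexNet:

    Defense              Attack accuracy   Training accuracy   Training loss
    None (undefended)    79%               95%                 0.1%
    Proposed method      53%               95%                 0.1%
    Ad-reg (λ = 2)       59%               99%                 0.5%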



Abstract

The invention relates to a federated learning membership inference attack defense method based on adversarial perturbation, and belongs to the technical field of federated learning privacy protection in machine learning. The method establishes a federated learning membership inference attack defense mechanism: before each participant uploads the model parameters trained on its local data, carefully designed adversarial perturbations are added to those parameters, so that the attack accuracy an attacker obtains by mounting a membership inference attack on a model trained with the defense mechanism is as close to 50% as possible, while the impact on the performance of the target model is kept as small as possible. The method thus simultaneously meets the requirements of protecting user data privacy and collaboratively training a high-performance model in a federated learning scenario.
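
The mechanism can be illustrated with a minimal sketch. How the perturbation is "carefully designed" is the substance of the invention and is not fully reproduced on this page; the sketch below assumes the participant holds a surrogate attack model and uses gradient descent to push its membership posterior toward 0.5 (random guessing) while keeping the perturbation small. `attack_model`, the flattened parameter vector `params`, the step count, and the penalty weight are all illustrative assumptions.

    import torch

    def craft_perturbation(params, attack_model, steps=20, lr=0.01, penalty=0.01):
        # Optimize additive noise so a surrogate attacker's membership
        # posterior on the perturbed parameters approaches 0.5 (random
        # guessing), with a norm penalty to limit the accuracy cost.
        delta = torch.zeros_like(params, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            posterior = attack_model(params + delta)  # attacker's membership estimate
            loss = (posterior - 0.5).pow(2).mean() + penalty * delta.norm()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return delta.detach()

    # A participant would then upload params + craft_perturbation(params, ...)
    # instead of its raw trained parameters.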

Description

Technical field

[0001] The present invention relates to a federated learning membership inference attack defense method based on adversarial perturbation, which aims to defend against membership inference attacks on participants collaboratively training machine learning models in federated learning scenarios, thereby protecting the privacy of participants' local data. The invention belongs to the technical field of federated learning privacy protection in machine learning.

Background

[0002] Machine learning mainly studies how to use computers to simulate or realize human learning activities, and is one of the research hotspots in the field of artificial intelligence. After decades of development, machine learning has been widely applied in data mining, computer vision, natural language processing, medical diagnosis, and other fields, with excellent performance.

[0003] In recent years, the rapid development of information technology has promoted the exponential gr...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F21/62, G06N3/04, G06N20/00
CPC: G06F21/6245, G06N20/00, G06N3/045
Inventor: 沈蒙, 魏雅倩, 王焕, 祝烈煌
Owner: BEIJING INSTITUTE OF TECHNOLOGY