Method for enhancing defense capability of neural network based on federated learning

A neural network and federated learning technology, applied in the field of enhancing neural network defense capabilities based on federated learning, which can solve problems such as the artificial intelligence data crisis, privacy leakage, and the security risks of sharing neural network model data.

Pending Publication Date: 2020-10-30
Applicant: GUANGZHOU UNIVERSITY (and 1 other)

AI Technical Summary

Problems solved by technology

Therefore, traditional methods of collecting and using data are no longer applicable, and big-data-driven artificial intelligence faces a data crisis. Simply expanding the data set and retraining the neural network model entails security risks such as uncontrolled data sharing and privacy leakage.



Examples


Embodiment 1

[0031] Embodiment 1: Referring to Figures 1-5, the present invention provides a method for enhancing neural network defense capability based on federated learning, comprising the following steps:

[0032] Step 1: Use federated learning to avoid centralized data collection. Keeping data local prevents data privacy from leaking out. The parties collaborate in distributed model training and encrypt intermediate results to protect data security; finally, the multi-party models are aggregated and fused into a better federated model, which enriches the training data set and reduces the effectiveness of adversarial examples.

[0033] The following are the specific implementation steps:

[0034] 1) Select a trusted server as a trusted third party; the terminals participating in model training (the participants, such as enterprises, universities, research institutes, and individual users) download the shared ini...
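The listing is cut off at this point, but the flow it begins to describe (a trusted server distributes a shared initial model, the participants train it locally on data that never leaves them, and the server fuses the returned models) matches the standard federated-averaging pattern. Below is a minimal sketch of that pattern, assuming plain FedAvg over NumPy weight vectors; the encryption of intermediate results mentioned in Step 1 is omitted, and the function names (`local_update`, `fed_avg`) are illustrative, not taken from the patent.

```python
# Minimal federated-averaging (FedAvg) sketch; illustrative only.
import numpy as np

def local_update(global_weights, local_data, lr=0.1, epochs=1):
    """Each participant trains on its own data; the data stays local.
    A linear model with squared loss stands in for the neural network."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """The trusted server fuses the multi-party models, weighting each
    participant's model by the size of its local data set."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three participants with private data drawn from a shared ground truth.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 5))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=20)))

global_w = np.zeros(5)          # the shared initial model
for _ in range(50):             # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print(np.round(global_w - w_true, 2))  # should be close to zero
```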



Abstract

The invention provides a method for enhancing the defense capability of a neural network based on federated learning. The method comprises the following steps: S1, using federated learning to keep data local and prevent data privacy leakage, cooperating with all parties to carry out distributed model training, encrypting intermediate results to protect data security, and aggregating and fusing the multi-party models to obtain a federated model; and S2, constructing adversarial samples and quickly searching for them with a suitable algorithm. By combining federated learning with the training process of the neural network model, the method solves the problem that data sets cannot circulate owing to privacy protection and legal and regulatory constraints, saves the effort of data collection, makes the training set of the neural network model richer and more independent, and overcomes the weakness that an incomplete training set leaves the neural network model vulnerable to adversarial-sample attacks.
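Step S2's fast adversarial-sample search algorithm is not named in the abstract. A standard fast search from the adversarial-examples literature is the fast gradient sign method (FGSM); the sketch below assumes FGSM on a toy logistic-regression model and is illustrative only, not necessarily the algorithm the patent specifies.

```python
# FGSM-style adversarial example search on a toy model; illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the direction of the sign of the loss gradient.
    Model: p = sigmoid(w.x + b) with binary cross-entropy loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # dL/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad_x)

# The clean sample is classified correctly (p > 0.5 for label 1);
# the perturbed sample crosses the decision boundary.
w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, -0.5]), 1.0
x_adv = fgsm(x, y, w, b, eps=0.8)
print(sigmoid(w @ x + b))      # ~0.82: correct, high confidence
print(sigmoid(w @ x_adv + b))  # ~0.29: now misclassified
```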

Description

Technical field

[0001] The invention relates to the technical field of artificial intelligence security, and in particular to a method for enhancing neural network defense capabilities based on federated learning.

Background technique

[0002] Adversarial examples are input samples to which subtle perturbations, difficult for a human to perceive, have been added, causing a machine learning model (such as a neural network) to give a wrong output with high confidence. The existence of adversarial examples shows that existing machine learning models still have security problems, which limits the application and development of artificial intelligence (AI) in areas with high safety requirements (such as autonomous driving). The paper "Intriguing Properties of Neural Networks" (Christian Szegedy, Wojciech Zaremba, et al. Intriguing Properties of Neural Networks. In ICLR, 2014.) proposed the concept of adversarial samples and proved that adversarial ...
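The definition in paragraph [0002] can also be stated formally. A common formalization (an assumption here; the patent text itself gives no formula) treats an adversarial example as a small bounded perturbation that changes the classifier's output:

```latex
% f: classifier, x: clean input, \delta: perturbation, \epsilon: budget.
\[
  \text{find } \delta \text{ such that } f(x+\delta) \neq f(x)
  \quad \text{subject to} \quad \lVert \delta \rVert_\infty \le \epsilon ,
\]
% i.e. a perturbation small enough to be imperceptible to a human
% that nevertheless flips the model's prediction.
```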


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N 3/08; G06N 20/10; G06F 21/60; G06F 21/62; H04L 29/06
CPC: G06F 21/602; G06F 21/6245; G06N 3/08; G06N 20/10; H04L 63/0428
Inventors: 顾钊铨, 李鉴明, 仇晶, 王乐, 唐可可, 韩伟红, 贾焰, 方滨兴
Owner: GUANGZHOU UNIVERSITY