Adversarial example defense method based on two-stage adversarial knowledge migration

A knowledge and example technology, applied in the field of adversarial example defense based on two-stage adversarial knowledge transfer, which can solve problems such as the inability to improve a simple DNN's defense against adversarial examples, poor defensive effect, and the difficulty of training for both classification accuracy and robustness.

Pending Publication Date: 2020-04-21
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY +1

AI Technical Summary

Problems solved by technology

At present, various model compression methods have been proposed, such as pruning, parameter quantization, and knowledge distillation, as well as direct adversarial training of simple DNNs on edge devices, but these existing techniques are not effective.
The reasons are: (1) a simple DNN is...


Examples


Embodiment 1

[0070] 1. Background knowledge

[0071] 1.1 Adversarial examples and threat models

[0072] Adversarial examples exist widely in data such as images, speech, and text. Taking an image classification system as an example, an image adversarial example is an unnatural image, carefully crafted from a natural image, that can deceive a deep neural network. The present invention gives a formal definition of an adversarial example:

[0073] Adversarial example: Let x be a normal data sample, y_true the correct classification label of x, f(·) a machine learning classifier, and F(·) human perceptual judgment. If there exists a perturbation δ such that f(x+δ) ≠ y_true while F(x+δ) = y_true, then the present invention calls x′ = x + δ an adversarial example.
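A minimal sketch (not part of the patent text) of how such a perturbation δ can be found in practice: the single-step FGSM attack is used here purely for illustration, and `model`, `x`, `y_true`, and `epsilon` are hypothetical placeholders.

```python
# Illustrative only: craft delta with FGSM and check the definition f(x + delta) != y_true.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, y_true, epsilon=0.03):
    """Return x' = x + delta with ||delta||_inf <= epsilon (FGSM, assumed attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)
    loss.backward()
    delta = epsilon * x.grad.sign()                 # perturbation delta
    return (x + delta).clamp(0.0, 1.0).detach()     # keep pixels in a valid range

# Definition check: the classifier is fooled, f(x') != y_true, while to a human
# observer F(x') = y_true because the change is imperceptibly small.
# x_adv = fgsm_adversarial_example(model, x, y_true)
# fooled = (model(x_adv).argmax(dim=1) != y_true)
```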

[0074] The present invention refers to such deception of a classifier by means of an adversarial example as an adversarial attack. The essence of an adversarial attack is to find...


Abstract

The invention belongs to the technical field of artificial intelligence security, and discloses an adversarial example defense method based on two-stage adversarial knowledge migration. The method first migrates adversarial knowledge from data to a large, complex DNN (deep neural network) through heterogeneous multi-source adversarial training, completing the first stage of adversarial knowledge migration. Then, using the soft labels of adversarial examples, an adversarial distillation technique is proposed to migrate adversarial knowledge from the complex DNN to a simple DNN, achieving the second stage of adversarial knowledge migration. With the two-stage adversarial knowledge migration method provided by the invention, a simple neural network on edge equipment can obtain robustness close to that of a large complex network, better solving the robustness problem of simple networks that pure adversarial training cannot solve. The adversarial distillation proposed by the invention has good algorithmic convergence, can stably improve the performance and robustness of a simple network model while accelerating convergence, and addresses the instability of model performance and robustness seen in ensemble adversarial training.
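As a rough illustration of the second stage described above (the exact objective is not given in this excerpt), a student loss of the standard knowledge-distillation form could combine the teacher's soft labels on adversarial examples with the hard true labels; the temperature T and weight alpha below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of an adversarial-distillation training objective: the simple (student)
# DNN imitates the soft labels of the adversarially trained complex (teacher) DNN on
# adversarial examples. T and alpha are assumed values, not taken from the patent.
import torch
import torch.nn.functional as F

def adversarial_distillation_loss(student_logits, teacher_logits, y_true, T=4.0, alpha=0.9):
    soft_targets = F.softmax(teacher_logits / T, dim=1)          # teacher soft labels
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       soft_targets, reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, y_true)               # true-label term
    return alpha * distill + (1.0 - alpha) * hard

# Per batch (sketch): x_adv is an adversarial example crafted from x.
# loss = adversarial_distillation_loss(student(x_adv), teacher(x_adv).detach(), y_true)
```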

Description

Technical Field

[0001] The invention belongs to the technical field of artificial intelligence security, and in particular relates to an adversarial example defense method based on two-stage adversarial knowledge transfer.

Background Technique

[0002] The closest existing technology: Deep Neural Networks (DNN) have recently been widely applied in image recognition, natural language processing, and other fields, but studies have shown that well-designed, imperceptible perturbations can cause a deep neural network to misclassify. Such examples with malicious noise added are called adversarial examples. The emergence of adversarial examples limits the application of deep neural networks in security-sensitive fields such as autonomous driving and facial payment. Researchers have done a lot of work on defending against adversarial examples; among these approaches, adversarial examples are used as training data, and adversarial training of the DNN i...
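For context on the adversarial training mentioned in the background, a minimal training step of that kind is sketched below; the FGSM attack, the even clean/adversarial mix, and all parameter values are assumptions for illustration and do not describe the invention's heterogeneous multi-source adversarial training.

```python
# Illustrative adversarial-training step: adversarial examples are generated on the fly
# and used alongside clean data as training data (background-art style, not the patent's method).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Craft adversarial counterparts of the batch with single-step FGSM (assumed attack).
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    # Update on an even mix of clean and adversarial examples.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```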

Claims


Application Information

IPC(8): G06N20/10, G06N3/04, G06N3/08
CPC: G06N20/10, G06N3/08, G06N3/045
Inventor 钱亚冠关晓惠周武杰李蔚潘俊云本胜楼琼
Owner ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY