Image classifier adversarial attack defense method based on disturbance evolution

An image classifier and adversarial-attack technology, applied to instruments, genetic models, genetic rules, and similar classifications. It addresses problems such as the inability to defend against unknown attacks and the inability to directly optimize or compare different attack methods, and achieves the effects of improved defense performance, increased perturbation diversity, and increased perturbation quality.

Active Publication Date: 2018-10-02
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0005] Many attack models already exist that can attack image classification models. If only one such attack is used for training, the resulting defense cannot withstand unknown attacks. Although different adversarial attack methods have different internal structures and therefore cannot be directly optimized or compared, they all output adversarial examples and perturbations.
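
To make this shared output format concrete, the minimal Python sketch below wraps two toy attacks behind one interface and ranks their outputs by attack effect against a toy black box. The wrapper class, the scoring function, and both attacks are hypothetical illustrations, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class AttackWrapper:
    """Uniform interface: whatever an attack's internal structure, expose its
    output as (adversarial example, perturbation) so outputs can be compared."""
    def __init__(self, attack_fn):
        self.attack_fn = attack_fn                 # any attack implementation

    def __call__(self, x, label):
        x_adv = self.attack_fn(x, label)
        return x_adv, x_adv - x                    # perturbation = adversarial - original

def attack_effect(black_box, x_adv, label):
    """Score an output by the confidence the black box loses on the true label."""
    return 1.0 - black_box(x_adv)[label]           # higher = stronger attack

# Two toy attacks with entirely different internal structures.
attacks = [
    AttackWrapper(lambda x, y: np.clip(x + 0.03 * np.sign(x - x.mean()), 0, 1)),
    AttackWrapper(lambda x, y: np.clip(x + rng.normal(0, 0.03, x.shape), 0, 1)),
]

# Toy black box: softmax over a fixed random linear model.
W = rng.normal(size=(10, 64))
black_box = lambda img: np.exp(W @ img) / np.exp(W @ img).sum()

x = rng.random(64)                                 # stand-in for a normal image
label = int(np.argmax(black_box(x)))
outputs = [wrapped(x, label) for wrapped in attacks]
outputs.sort(key=lambda out: attack_effect(black_box, out[0], label), reverse=True)
```

Because every attack is reduced to the same (example, perturbation) pair, the perturbations themselves become comparable individuals, which is what allows the evolutionary treatment described below.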

Examples

Embodiment Construction

[0053] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described below in further detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention and do not limit its protection scope.

[0054] This embodiment uses various types of pictures from the ImageNet dataset for experiments. As shown in Figures 1-3, the defense method against adversarial attacks on image classifiers based on perturbation evolution provided in this embodiment is divided into three stages: the optimal adversarial example generation stage, the adversarial example detector acquisition stage, and the detection and image classification stage. The specific process of each stage is as follows:

[0055] The optimal adversarial example generation stage

[0056] S101: Input the normal picture...
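
The remaining steps of this embodiment are truncated in this record. As a reading aid, the following minimal Python sketch shows one way the three-stage structure named in [0054] could be wired together; every function, the nearest-centroid detector, and the toy data are hypothetical stand-ins, not the patent's actual steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_generate(x, label, attacks):
    """Optimal adversarial example generation: collect candidate perturbations
    from several attack models (their evolution is sketched with the abstract)."""
    return [attack(x, label) - x for attack in attacks]

def stage2_train_detector(normals, adversarials):
    """Detector acquisition: a trivial nearest-centroid rule stands in for the
    detector trained on normal vs. adversarial examples."""
    mu_n = np.mean([v.ravel() for v in normals], axis=0)
    mu_a = np.mean([v.ravel() for v in adversarials], axis=0)
    return lambda x: np.linalg.norm(x.ravel() - mu_a) < np.linalg.norm(x.ravel() - mu_n)

def stage3_classify(x, detector, black_box):
    """Detection and classification: screen first; only inputs judged normal
    reach the black box, whose class is returned."""
    if detector(x):
        return None                                # flagged as adversarial, rejected
    return int(np.argmax(black_box(x)))

# Toy wiring of the three stages.
black_box = lambda img: np.eye(10)[int(img.sum()) % 10]            # dummy classifier
attacks = [lambda x, y: np.clip(x + 0.08, 0, 1),                   # dummy attack models
           lambda x, y: np.clip(x + 0.04, 0, 1)]
normals = [rng.random(64) for _ in range(8)]
advs = [np.clip(x + d, 0, 1) for x in normals for d in stage1_generate(x, 0, attacks)]
detector = stage2_train_detector(normals, advs)
print(stage3_classify(normals[0], detector, black_box))            # class of a clean input
```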

Abstract

The invention discloses an image classifier adversarial attack defense method based on perturbation evolution. The method comprises the following steps: 1) the sample is attacked by different attack models so as to obtain different types of perturbations; 2) the black-box model is attacked with the adversarial examples corresponding to the perturbations, and the attack effects are ranked; 3) cloning, crossover, mutation, and other operations are performed on the perturbations with strong attack effects so as to obtain new perturbations; 4) the perturbations are updated using a parent-child hybrid selection mode so as to achieve perturbation evolution; 5) an adversarial example detector is trained with the adversarial examples corresponding to the evolved perturbations and with normal samples; and 6) when a sample is to be classified, it is first screened by the adversarial example detector, and only a normal sample is then input to the black-box model, which returns its class, thereby achieving the adversarial attack defense effect.
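
To make steps 1) to 4) concrete, the following is a minimal numpy sketch of one way such a perturbation-evolution loop can be organized. The population size, uniform crossover, Gaussian mutation noise, and toy black-box model are all assumptions for illustration, not the patent's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_effect(black_box, x, delta, label):
    """Step 2: score a perturbation by the confidence the black box loses."""
    return 1.0 - black_box(np.clip(x + delta, 0.0, 1.0))[label]

def evolve(pop, x, label, black_box, n_elite=4, sigma=0.01):
    # Step 2: rank the current perturbations by attack effect.
    scores = [attack_effect(black_box, x, d, label) for d in pop]
    elite = [pop[i] for i in np.argsort(scores)[::-1][:n_elite]]

    children = []
    for _ in range(len(pop)):
        # Step 3: clone two strong parents, cross them, then mutate.
        i, j = rng.integers(0, n_elite, size=2)
        mask = rng.random(x.shape) < 0.5                  # uniform crossover
        child = np.where(mask, elite[i], elite[j]) + rng.normal(0, sigma, x.shape)
        children.append(child)

    # Step 4: parent-child hybrid selection keeps the best of both generations.
    merged = pop + children
    merged_scores = [attack_effect(black_box, x, d, label) for d in merged]
    keep = np.argsort(merged_scores)[::-1][:len(pop)]
    return [merged[k] for k in keep]

# Toy black box and seed population (step 1 would use several attack models).
W = rng.normal(size=(10, 64))
black_box = lambda img: np.exp(W @ img) / np.exp(W @ img).sum()
x = rng.random(64)
label = int(np.argmax(black_box(x)))
pop = [rng.normal(0, 0.05, x.shape) for _ in range(12)]
for _ in range(20):
    pop = evolve(pop, x, label, black_box)                # evolved perturbations

best = max(pop, key=lambda d: attack_effect(black_box, x, d, label))
print(attack_effect(black_box, x, best, label))           # near 1.0 = confidence destroyed
```

The parent-child hybrid selection in the last step is what the abstract calls perturbation evolution: parents and children compete in a single pool, so attack quality never regresses from one generation to the next.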

Description

Technical field

[0001] The invention belongs to the technical field of deep learning security, and specifically relates to a defense method against adversarial attacks on image classifiers based on perturbation evolution.

Background technique

[0002] Deep learning is inspired by neuroscience. It can learn from large amounts of data to obtain more accurate classification results than general algorithms, and it has powerful feature learning and feature expression capabilities. As deep learning becomes widely used in fields such as computer vision, speech recognition, language processing, financial fraud detection, and malware detection, its security issues have gradually attracted attention.

[0003] Although deep learning achieves high classification performance in the field of computer vision, Szegedy et al. found that deep models are vulnerable to subtle perturbations. These small perturbations are almost imperceptible to the human visual system, but they can make the deep model...
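
As a concrete illustration of the vulnerability described in [0003], the sketch below applies the classic fast gradient sign method (FGSM), a standard attack from this literature rather than the patent's method, to a toy linear softmax model; the model, image size, and epsilon are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))              # toy linear stand-in for a deep model

def predict(x):
    return int(np.argmax(W @ x))

def fgsm(x, label, eps=0.05):
    """One FGSM step: for a linear model the loss gradient w.r.t. the input is
    the weight-row difference between a competing class and the true class."""
    z = W @ x
    rival = int(np.argsort(z)[-2]) if predict(x) == label else predict(x)
    grad = W[rival] - W[label]              # direction that hurts the true class
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.random(784)                         # stand-in for a normal image
y = predict(x)
x_adv = fgsm(x, y)
# The per-pixel change is at most eps, yet the predicted class typically flips.
print(y, predict(x_adv), float(np.abs(x_adv - x).max()))
```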

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/12
CPC: G06N3/126; G06F18/241; G06F18/214
Inventor: 陈晋音, 苏蒙蒙, 徐轩珩, 郑海斌, 林翔, 熊晖, 沈诗婧, 施朝霞
Owner: ZHEJIANG UNIV OF TECH