Method for defending against adversarial example attacks based on a convolutional denoising auto-encoder

An adversarial example defense based on auto-encoding technology, applied in the field of information security. It solves the problems that adversarial training has difficulty fitting adversarial samples and clean samples at the same time, lacks interpretability, and performs poorly in efficiency, with the effects of reduced computational overhead, good interpretability, and improved classification accuracy.

Active Publication Date: 2018-09-14
CHONGQING UNIV

Problems solved by technology

Compared with ordinary training, adding adversarial samples means the training process consumes more computing resources and takes more time, and it is difficult to fit adversarial samples and clean samples at the same time. In addition, the effect of adversarial training depends on how representative the adversarial sample set is: whenever new adversarial samples are added, the target model must be retrained to consolidate the defense, so the efficiency is poor. Adversarial training also lacks good interpretability.

Embodiment Construction

[0032] To illustrate the operation of this method more concretely, we use the MNIST data set and the Cleverhans library as an example. It is worth emphasizing that the present invention is not limited to the MNIST data set; it is generally applicable to any image data set used for classification and recognition, and the implementation parameters need to be adjusted according to the actual situation.

[0033] The MNIST data set is a handwritten digit data set constructed by Google Labs and the Courant Institute of New York University. The training set contains 60,000 digit images and the test set contains 10,000 images; it is often used for prototype verification of image recognition algorithms. Cleverhans is an open-source software library that provides reference implementations of standard adversarial example constructions and can be used to develop more robust machine learning models. Cleverhans has a built-in FGSM (Fast Gradient Sign Method) ...
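
As a concrete, non-authoritative illustration of this setup (not part of the patent text), the sketch below loads MNIST and crafts FGSM adversarial examples with the Cleverhans v4 TF2 API; the small CNN, the single training epoch, and eps=0.3 are illustrative assumptions, not parameters taken from the patent.

```python
# Sketch only: crafting FGSM adversarial MNIST digits with Cleverhans (v4.x, TF2 API).
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

# Load MNIST and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# A small CNN classifier standing in for the target model (assumed architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),  # logits
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)

# Craft adversarial test images: x* = x + eps * sign(grad_x loss), clipped to [0, 1].
x_adv = fast_gradient_method(model, x_test, eps=0.3, norm=np.inf,
                             clip_min=0.0, clip_max=1.0)

# Accuracy typically drops sharply on x_adv even though perturbations are small.
print("clean accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
print("adversarial accuracy:", model.evaluate(x_adv, y_test, verbose=0)[1])
```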

Abstract

The present invention relates to a method for defending against adversarial example attacks based on a convolutional denoising auto-encoder. Adversarial image examples x* (for which the image classifier outputs label y*) are constructed by manually adding adversarial perturbations to clean, unmodified image samples x (for which the classifier outputs label y), achieving the fraudulent outcome y* ≠ y: the image classifier misclassifies two images that essentially show the same content into two different classes. The present invention designs an integrated defense model, based on a convolutional denoising auto-encoder (CDAE), that is connected in front of the target image classifier. Input samples are encoded and decoded inside a well-trained CDAE to remove most of the adversarial perturbation, yielding denoised samples close to the original clean samples, which are then passed to the target image classifier. This improves the classification accuracy of the target classifier and has the effect of defending against adversarial example attacks.
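
The following sketch shows this idea in Keras under assumed layer sizes and hyperparameters; it is an illustration of the encode/decode-then-classify pipeline the abstract describes, not the patent's exact network. The variables x_adv_train, x_train, and classifier are assumed to come from the surrounding setup.

```python
# Sketch only: a convolutional denoising auto-encoder (CDAE) prepended to a
# target classifier. Layer sizes and the noise model are illustrative assumptions.
import tensorflow as tf

def build_cdae():
    # Encoder compresses the 28x28 image; decoder reconstructs it.
    inp = tf.keras.Input(shape=(28, 28, 1))
    h = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    h = tf.keras.layers.MaxPooling2D()(h)                      # 28x28 -> 14x14
    h = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(h)
    h = tf.keras.layers.UpSampling2D()(h)                      # 14x14 -> 28x28
    out = tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same")(h)
    return tf.keras.Model(inp, out)

cdae = build_cdae()
cdae.compile(optimizer="adam", loss="binary_crossentropy")

# Training pairs map perturbed inputs back to their clean originals; x_adv_train
# could be FGSM examples as in the previous sketch, or noise-corrupted copies.
# cdae.fit(x_adv_train, x_train, epochs=10, batch_size=128)

def defended_predict(classifier, x):
    """Integrated defense at inference time: denoise first, then classify."""
    x_denoised = cdae(x)            # encode/decode strips most of the perturbation
    return classifier(x_denoised)   # labels are computed on the denoised sample
```

Training the CDAE on (perturbed, clean) pairs, rather than retraining the classifier itself, is what lets the defense be attached to an existing target model, which matches the integrated-defense framing of the abstract.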

Description

Technical field

[0001] The invention belongs to the technical field of information security and relates to a method for defending against adversarial example attacks based on a convolutional denoising auto-encoder.

Background technique

[0002] As machine learning technology is widely used in fields such as identity verification, autonomous driving, and speech recognition, its security has attracted growing attention. Nguyen et al. found in 2014 that deep neural networks are easily fooled by adversarial examples. In 2015, Goodfellow et al. showed that any machine learning classifier can be fooled by adversarial examples, not only deep learning networks. The attacker slightly modifies the input data so that a human user cannot perceive the change, yet the machine learning system accepts the data and makes wrong subsequent decisions; that is, from an unmodified clean sample x (for which the image classifier outputs label y), the adversarial image example x* is constructed by adding an imperceptible perturbation so that the classifier outputs a label y* ≠ y.
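
Stated formally (a hedged formalization using the notation above; the ∞-norm bound ε is one common way to express "imperceptible" and is an assumption, not a constraint given in the source):

```latex
x^{*} = x + \delta, \qquad \lVert \delta \rVert_{\infty} \le \epsilon,
\qquad y = f(x), \qquad y^{*} = f(x^{*}), \qquad \text{attack succeeds iff } y^{*} \ne y .
```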

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06K9/46
CPC: G06V10/443; G06F18/24; G06F18/214
Inventors: 贾云健, 李独运, 李勇明
Owner: CHONGQING UNIV