An adversarial sample detection method based on the distance from a sample to a decision boundary

A technology for adversarial sample detection, applied to instruments, character and pattern recognition, computer components, etc. It addresses the security vulnerabilities of artificial-intelligence classifiers and the unsatisfactory effect of existing defenses, and achieves improved security and markedly better detection results.

Inactive Publication Date: 2019-01-08
SHANGHAI JIAO TONG UNIV
Cites 0 · Cited by 14

AI Technical Summary

Problems solved by technology

However, research has found that artificial-intelligence classifiers suffer from serious security vulnerabilities: a malicious attacker can apply small perturbations to normally recognized samples to turn them into adversarial samples, which cause the classifier to misclassify. Existing defenses resist adversarial-sample attacks to a certain extent, but their effect remains unsatisfactory. Many researchers therefore hope to detect adversarial samples through some of their inherent characteristics, so as to resist adversarial attacks.



Examples


Embodiment Construction

[0021] As shown in Figure 1, this embodiment uses the BelgiumTS data set and protects a road-sign recognition API against adversarial sample attacks by the method of this embodiment.

[0022] This embodiment specifically includes:

[0023] Step 1. Generation of adversarial samples: take the API's training sample set as the normal samples, and use part of the normal samples to generate adversarial samples (mixed in equal proportions) through four attack methods: iter-FGSM, C&W, DeepFool, and JSMA.
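As a hedged illustration of one of the four attacks named in Step 1, the sketch below implements iterative FGSM against a generic PyTorch classifier; the model, inputs, and hyper-parameters (eps, alpha, steps) are placeholders rather than values from this embodiment.

```python
# Sketch of iter-FGSM: repeated signed-gradient steps projected into an L-infinity ball.
import torch
import torch.nn.functional as F

def iter_fgsm(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)        # loss the attacker wants to increase
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # signed-gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project back into the eps-ball around x
        x_adv = x_adv.clamp(0.0, 1.0)                  # keep pixel values in a valid range
    return x_adv.detach()
```

C&W, DeepFool, and JSMA samples would be generated analogously and mixed with the iter-FGSM samples in equal proportions.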

[0024] Step 2. Elimination of invalid samples: remove invalid samples from both the normal samples and the adversarial samples. Invalid samples include: ① normal samples that are nonetheless misidentified by the API; these lie close to the decision boundary and are removed; ② adversarial samples that are nonetheless correctly identified by the API; this type of adversarial attack has failed and cannot threaten the API.
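A minimal sketch of the filtering rule in Step 2, assuming the road-sign recognition API is exposed as a batch prediction function `api_predict` (an assumed name, not given in the patent):

```python
# Sketch only: keep normal samples the API classifies correctly (case ① removed)
# and adversarial samples the API misclassifies (case ② removed).
import numpy as np

def remove_invalid(normal_x, normal_y, adv_x, adv_y, api_predict):
    keep_normal = api_predict(normal_x) == normal_y   # drop misidentified normal samples
    keep_adv = api_predict(adv_x) != adv_y            # drop adversarial samples that failed to fool the API
    return (normal_x[keep_normal], normal_y[keep_normal],
            adv_x[keep_adv], adv_y[keep_adv])
```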

[0025] Step 3. Calculate the upper and lower bound...
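Step 3 is truncated in this excerpt, so the exact upper- and lower-bound computation is not available. Purely as an illustrative assumption, one common way to obtain an upper-bound estimate of the distance from a sample to the decision boundary is a binary search along a fixed direction until the predicted label flips; `api_predict`, `direction`, and the search parameters below are hypothetical.

```python
# Illustrative sketch (not the patent's exact bound): binary-search the smallest
# perturbation along `direction` that changes the API's prediction for x.
import numpy as np

def boundary_distance_upper_bound(x, api_predict, direction, max_scale=1.0, steps=20):
    label = api_predict(x[None])[0]                    # current predicted class
    lo, hi = 0.0, max_scale
    if api_predict((x + hi * direction)[None])[0] == label:
        return max_scale * np.linalg.norm(direction)   # no flip found; report the search cap
    for _ in range(steps):                             # shrink the bracket around the boundary
        mid = 0.5 * (lo + hi)
        if api_predict((x + mid * direction)[None])[0] == label:
            lo = mid
        else:
            hi = mid
    return hi * np.linalg.norm(direction)              # smallest flipping scale found, times step length
```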



Abstract

An adversarial sample detection method based on the distance from a sample to a decision boundary. Adversarial samples are generated from normal samples, and feature extraction is performed on all samples; that is, a distance estimate from each sample to the decision boundary is computed. These distance estimates are used as sample features to train a classifier, and the trained classifier serves as the detector for adversarial samples. The method can be widely applied to classifier-based machine-learning models, such as speech recognition and image classification, to improve the detection rate of adversarial samples. It can be used in an artificial-intelligence API to filter input samples, significantly improving the security of artificial intelligence.
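A minimal sketch of the pipeline described in the abstract, assuming the distance estimates are scalar features and using logistic regression as one possible choice of detector; `estimate_distance`, the variable names, and the classifier choice are illustrative assumptions, not specified by the patent.

```python
# Sketch only: distance-to-boundary estimates as features, a binary classifier as detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_detector(normal_x, adv_x, estimate_distance):
    """normal_x / adv_x: arrays of samples; estimate_distance: placeholder returning
    the estimated distance from one sample to the decision boundary."""
    all_x = np.concatenate([normal_x, adv_x])
    feats = np.array([estimate_distance(x) for x in all_x]).reshape(-1, 1)
    labels = np.concatenate([np.zeros(len(normal_x)), np.ones(len(adv_x))])  # 1 = adversarial
    return LogisticRegression().fit(feats, labels)

# Usage: detector.predict([[estimate_distance(x_new)]]) flags x_new as adversarial or not.
```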

Description

technical field

[0001] The present invention relates to a technology in the field of artificial-intelligence adversarial attack and defense, in particular to an adversarial sample detection method based on the distance from a sample to a decision boundary.

Background technique

[0002] Artificial intelligence has developed rapidly in recent years and has been applied in more and more fields. However, research has found that artificial-intelligence classifiers suffer from serious security vulnerabilities: a malicious attacker can apply small perturbations to normally recognized samples to turn them into adversarial samples, which cause the classifier to misclassify. Existing defenses resist adversarial-sample attacks to a certain extent, but their effect remains unsatisfactory. Many researchers therefore hope to detect adversarial samples through some of their inherent characteristics, so as to resist adversarial attacks.

Contents of the invention

[0003] Aiming at the ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62
CPC: G06V10/757; G06F18/24
Inventor: 易平, 胡嘉尚, 张浩, 倪洁, 何芷珊, 胡又佳
Owner: SHANGHAI JIAO TONG UNIV