Adversarial attack detection method

An adversarial attack detection and adversarial-sample technology, applied to neural learning methods, instruments, biological neural network models, etc. It addresses problems such as poor performance under mixed attacks, uneven resistance to different attack methods, and the failure of prior methods to account for attack-specific sensitive layers, with the effect of reducing uncertainty, stabilizing detection results, and alleviating the sparsity problem.

Active Publication Date: 2021-11-09
NANKAI UNIV

AI Technical Summary

Problems solved by technology

[0005] The above methods have achieved good results, but most studies have not considered that different attack methods and different original inputs may correspond to different sensitive layers in the target deep neural network, and that each hidden layer contributes differently to the discovery of different adversarial samples. As a result, these methods resist different adversarial attacks with uneven effectiveness, and they perform poorly when multiple attack methods are mixed.




Embodiment Construction

[0029] To enable those skilled in the art to better understand the present invention, its technical solution is further described below in conjunction with specific embodiments.

[0030] Referring to Figure 1, the adversarial attack detection method comprises the following steps:

[0031] Step S1, preprocessing the input data set of the target deep neural network to be attacked to obtain input samples.

[0032] The preprocessing of the input data set includes the following steps:

[0033] Step S11, divide the input data set into a training set and a test set, train the target system on the training set, use it to predict the test-set samples, remove the wrongly predicted samples, and record the remainder as natural input samples.

[0034] In this embodiment, the target system to be attacked by adversarial samples is the ResNet-18 model, where ResNet-18 is composed of r1 convolutional layers, r2 average pooling layers...
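As an illustrative sketch of step S11 only (assuming a PyTorch setup and a hypothetical filter_natural_samples helper; this is not the patented implementation):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def filter_natural_samples(model, dataset, device="cpu", batch_size=128):
    """Step S11 sketch: keep only the test samples that the trained
    target model classifies correctly; these become the 'natural'
    input samples."""
    model.eval().to(device)
    loader = DataLoader(dataset, batch_size=batch_size)
    kept_x, kept_y = [], []
    with torch.no_grad():
        for x, y in loader:
            preds = model(x.to(device)).argmax(dim=1).cpu()
            mask = preds == y                    # correctly predicted only
            kept_x.append(x[mask])
            kept_y.append(y[mask])
    return TensorDataset(torch.cat(kept_x), torch.cat(kept_y))
```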



Abstract

The invention discloses an adversarial attack detection method, which comprises the following steps: S1, generating adversarial samples for a target deep neural network through multiple adversarial attack algorithms, and mixing them with natural input samples to serve as input samples; S2, feeding the input samples into the target deep neural network to extract global features and hidden-layer features; S3, performing feature fusion on the global features and hidden-layer features of each input sample to obtain its final feature representation; S4, training a classifier on the final feature representations to obtain an adversarial sample detection model; and S5, using the detection model obtained in step S4 to detect whether input data contains adversarial samples. The method can dynamically allocate different weights to different hidden layers of the attacked target system; it can discover adversarial samples produced by a single attack mode, and it can detect the adversarial samples generated by each attack method without being affected by mixed attack modes.
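A minimal sketch of the S2-S4 pipeline (hidden-layer features captured via PyTorch forward hooks, plain concatenation standing in for the patent's dynamically weighted fusion, and a logistic-regression detector; the layer choice, fusion scheme, and classifier here are illustrative assumptions, not the patent's exact design):

```python
import torch
from torchvision.models import resnet18
from sklearn.linear_model import LogisticRegression

model = resnet18(num_classes=10).eval()

# S2: capture hidden-layer activations with forward hooks.
activations = {}
def make_hook(name):
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

layer_names = ["layer1", "layer2", "layer3", "layer4"]
for name in layer_names:
    getattr(model, name).register_forward_hook(make_hook(name))

def extract_features(x):
    """S2/S3 sketch: global feature = final logits; hidden-layer
    features = spatially pooled activations; fusion = concatenation
    (the patent instead weights the layers dynamically)."""
    with torch.no_grad():
        logits = model(x)                        # global feature
    hidden = [activations[n].mean(dim=(2, 3))    # pool H, W away
              for n in layer_names]
    return torch.cat([logits] + hidden, dim=1)   # fused representation

# S4: train a binary detector on the fused features, e.g.
#   X: mixed natural + adversarial batch, y: 1 = adversarial, 0 = natural
#   detector = LogisticRegression(max_iter=1000).fit(
#       extract_features(X).numpy(), y)
```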

Description

Technical field

[0001] The invention belongs to the fields of adversarial attack defense, artificial intelligence technology application, and artificial intelligence system security, and specifically relates to an adversarial attack detection method.

Background technique

[0002] As a technology for realizing machine learning, deep learning has powerful capabilities in feature extraction and representation, data fitting, and complex problem solving. It is widely used in image classification, speech recognition, object detection, machine translation, recommendation systems, and many other fields, and has brought great convenience to people's lives. However, security problems inherent in deep learning limit its application in safety-critical tasks; in particular, deep learning models are extremely vulnerable to adversarial samples. Deliberately adding small perturbations, imperceptible to the human eye, to ordinary samples may lead to changes in the model's decision-making...
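To make the background concrete, here is a minimal sketch of how such a perturbation can be crafted, using the well-known FGSM attack from the literature (the attack choice and epsilon value are illustrative assumptions, not part of this patent):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft an adversarial sample with one signed-gradient step
    that increases the classification loss (FGSM, Goodfellow et al.)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb in the direction that maximizes the loss, then clamp
    # back to the valid image range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```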


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/047; G06N3/045; G06F18/2415; G06F18/253; G06F18/214; Y02T10/40
Inventor: 徐思涵, 麦隽韵, 王志煜, 李君龙, 李梅, 蔡祥睿
Owner: NANKAI UNIV