Adversarial defense method against adversarial attacks based on an artificial immune algorithm

An artificial immune algorithm and adversarial-example technology, applied in the field of defense against adversarial attacks based on artificial immune algorithms, which solves the problems of poor defense effect and low classifier recognition accuracy, and achieves strong sample diversity and a strong defense effect.

Active Publication Date: 2020-08-28
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0006] The present invention provides an adversarial defense method against adversarial attacks based on an artificial immune algorithm, to solve the technical problems in the prior art of poor defense effect and low classifier recognition accuracy when facing adversarial attacks based on artificial immune algorithms.




Embodiment Construction

[0052] The present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be noted that the following embodiments are intended to facilitate the understanding of the present invention, but do not limit it in any way.

[0053] As shown in Figure 1 and Figure 2, this embodiment provides an adversarial defense method against adversarial attacks based on an artificial immune algorithm, comprising the following steps:

[0054] 1) Establish a data set. The data set consists of two parts, a training set and a test set. The specific process is as follows:

[0055] 1.1) Select the CIFAR-10 data set as the normal sample data set. It consists of two parts, a training set containing 50,000 images and a test set containing 10,000 images.

[0056] 1.2) Initialize the population. Randomly add perturbation blocks to the normal image samples to form N (N = 25~50) different adversarial samples...
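A minimal illustrative sketch of steps 1.1) and 1.2) above, assuming CIFAR-10 is loaded through Keras and the perturbation is a single square noise block pasted at a random position. The helper names (add_perturbation_block, init_population), the block size, and the pixel range are assumptions made for illustration, not parameters specified by the patent.

# Sketch of steps 1.1) and 1.2); block size and pixel range are illustrative assumptions.
import numpy as np
from tensorflow.keras.datasets import cifar10

# 1.1) CIFAR-10 as the normal sample set: 50,000 training and 10,000 test images.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

def add_perturbation_block(image, block_size=4, rng=None):
    """Paste one random noise block at a random position on a copy of the image."""
    rng = rng or np.random.default_rng()
    h, w, c = image.shape
    top = rng.integers(0, h - block_size)
    left = rng.integers(0, w - block_size)
    perturbed = image.copy()
    perturbed[top:top + block_size, left:left + block_size, :] = \
        rng.integers(0, 256, size=(block_size, block_size, c))
    return perturbed

def init_population(image, n=None, rng=None):
    """1.2) Build an initial population of N (N = 25~50) adversarial candidates."""
    rng = rng or np.random.default_rng()
    n = n or int(rng.integers(25, 51))
    return np.stack([add_perturbation_block(image, rng=rng) for _ in range(n)])

population = init_population(x_train[0])
print(population.shape)  # (N, 32, 32, 3)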



Abstract

The invention discloses an adversarial defense method against adversarial attacks based on an artificial immune algorithm. The method comprises the following steps: (1) obtaining an original picture sample set; generating adversarial samples from the original picture samples using an artificial immune algorithm; merging the original picture samples and the adversarial samples, and dividing them into a training set and a test set; (2) training a picture classifier with the training set and the test set to obtain a picture recognition model; and (3) recognizing the picture to be recognized with the picture recognition model, thereby realizing adversarial defense for picture recognition. The invention solves the technical problems of poor defense effect and low classifier recognition accuracy that arise in the prior art when resisting adversarial attacks based on an artificial immune algorithm.
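As a rough sketch of steps (1)-(3), assuming the adversarial samples have already been produced by the artificial immune algorithm (for example by evolving a population such as the one initialized in the embodiment above) and using a small Keras CNN as a stand-in for the picture classifier; the network architecture, split ratio, and training hyper-parameters are illustrative assumptions rather than the patent's configuration.

# Illustrative adversarial-training pipeline; architecture and hyper-parameters are assumed.
import numpy as np
from tensorflow import keras

def build_classifier(num_classes=10):
    """A small CNN standing in for the picture classifier."""
    return keras.Sequential([
        keras.Input(shape=(32, 32, 3)),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.Flatten(),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])

def train_defended_model(x_normal, y_normal, x_adv, y_adv, test_split=0.2):
    # (1) Merge original and adversarial samples, then split into training and test sets.
    x = np.concatenate([x_normal, x_adv]).astype("float32") / 255.0
    y = np.concatenate([y_normal, y_adv])
    idx = np.random.permutation(len(x))
    split = int(len(x) * (1 - test_split))
    train_idx, test_idx = idx[:split], idx[split:]

    # (2) Train the picture classifier to obtain the picture recognition model.
    model = build_classifier()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x[train_idx], y[train_idx], epochs=10, batch_size=128,
              validation_data=(x[test_idx], y[test_idx]))
    return model

# (3) model.predict(images_to_recognize) then yields the defended predictions.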

Description

Technical Field

[0001] The invention relates to the field of adversarial defense, and in particular to an adversarial defense method against adversarial attacks based on an artificial immune algorithm.

Background

[0002] In recent years, deep neural networks have achieved great breakthroughs in many machine learning fields, such as image classification, object recognition, object detection, speech recognition, language translation, and speech synthesis. However, despite this success, a large number of recent studies have shown that even the best deep neural networks can still be fooled by adversarial examples. This discovery poses a serious threat to security-critical applications such as autonomous vehicles, biometrics, and surveillance systems. One can easily imagine that, in an autonomous driving system, an image that should have been recognized as a stop signal at a red light is attacked by an adversarial example and...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/00, G06N3/04, G06K9/62
CPC: G06N3/006, G06N3/045, G06F18/22, G06F18/24, G06F18/214, Y02T10/40
Inventors: 陈晋音, 上官文昌, 沈诗婧
Owner: ZHEJIANG UNIV OF TECH