Adversarial defense method based on class activation mapping

A technique based on activation values and class activation maps, applied in image data processing, image enhancement, instruments, etc. It addresses problems of existing methods such as high computational cost, reduced defense efficiency, and lack of diversity, and achieves low data-processing cost, improved defense efficiency, and high versatility.

Pending Publication Date: 2020-10-09
ZHEJIANG UNIV OF TECH
0 Cites · 2 Cited by

AI Technical Summary

Problems solved by technology

All these methods share some disadvantages: they offer no general defense methodology, cannot resist different adversarial attacks, and lack diversity. In addition, they introduce extra computational cost, which affects defense efficiency.

Method used



Detailed Description of the Embodiments

[0027] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, and do not limit the protection scope of the present invention.

[0028] S1. Establish a comparison image set that maximizes the neuron activation value in the prediction model by using the gradient ascent method.

[0029] S11. Select an image set, and randomly select an image x with a label l from the image set;

[0030] S12. Input the image x into the prediction model, compute the activation value a_{i,j} of the j-th neuron in the i-th layer of the prediction model for image x and the activation gradient ∂a_{i,j}/∂x, and iteratively update the pixels of image x according to formula (1);

[0031] x' = x + η · ∂a_{i,j}/∂x   (1)

[0032] In the formula, x' is the updated image and η is the gradient-ascent step size.
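The gradient-ascent pixel update of step S12 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the "neuron" is a toy linear unit a(x) = w·x (so its gradient with respect to x is simply w), and the learning rate, step count, and [0, 1] pixel clipping are assumptions. In a real prediction model the gradient ∂a_{i,j}/∂x would come from backpropagation through layer i to neuron j.

```python
import numpy as np

def maximize_activation(x, weight, lr=0.1, steps=50):
    """Gradient-ascent update x' = x + lr * d a / d x (toy stand-in for S12).

    The neuron is linear, a(x) = weight . x, so its gradient w.r.t. x is
    just `weight`; a deep model would supply this gradient via backprop.
    """
    x = x.copy()
    for _ in range(steps):
        grad = weight                         # d(weight . x)/dx for a linear unit
        x = np.clip(x + lr * grad, 0.0, 1.0)  # keep pixels in a valid range
    return x

rng = np.random.default_rng(0)
x0 = rng.random(8)                  # a tiny "image" of 8 pixels
w = rng.standard_normal(8)          # the neuron's weights
x1 = maximize_activation(x0, w)
print(float(w @ x0), float(w @ x1))  # the activation rises after the updates
```

Repeating this for many images yields the comparison image set of step S1, each image driven to strongly activate a chosen neuron.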



Abstract

The invention discloses an adversarial defense method based on class activation mapping. The method comprises the following steps: S1, establishing, by the gradient ascent method, a comparison image set that maximizes neuron activation values in a prediction model; S2, locating a judgment region based on the class activation map of the image to be detected; S3, calculating, with a binarization algorithm, the inconsistency between the judgment region of the image to be detected and that of the same-label comparison image; S4, judging whether the image to be detected is perturbed: if the inconsistency exceeds a threshold, the image is deemed to carry an adversarial perturbation, otherwise it is deemed a normal image; and S5, removing the adversarial perturbation from the image to be detected. The method is highly versatile, can resist different adversarial attacks, has low data-processing cost, and improves defense efficiency.

Description

Technical Field

[0001] The invention belongs to the field of adversarial defense, and in particular relates to an adversarial defense method based on class activation mapping.

Background Technique

[0002] In recent years, deep learning has made major breakthroughs in machine-learning fields such as computer vision, speech recognition, and reinforcement learning, and performs extremely well in visual tasks such as video recognition, image classification, and video capture. Alongside these successes, however, deep neural networks were found to be vulnerable to adversarial perturbations (maliciously crafted and added to input data), known as adversarial attacks. This vulnerability has raised widespread concern: research shows that even a small perturbation, indistinguishable to human visual perception, can easily cause the model to make catastrophic mispredictions. For example, in autonomous dri...

Claims


Application Information

IPC(8): G06T7/11; G06K9/62; G06N3/04; G06T7/194; G06T7/90
CPC: G06T7/11; G06T7/194; G06T7/90; G06T2207/20056; G06T2207/20032; G06N3/045; G06F18/22
Inventors: 陈晋音, 上官文昌, 沈诗婧
Owner ZHEJIANG UNIV OF TECH