
A Step Size Adaptive Adversarial Attack Method Based on Model Extraction

An adaptive step-size technology applied to biological neural network models, character and pattern recognition, instruments, etc., which achieves a good attack effect and a strong non-black-box attack capability.

Active Publication Date: 2022-03-15
TIANJIN UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

At the same time, existing methods use the gradient information obtained by model extraction at each step only to compute the sign of the gradient value, discarding its magnitude.
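For context, the sign-only update being criticized looks roughly like the sketch below (assuming a PyTorch classifier and the usual I-FGSM-style formulation; this is our reading of "existing methods", not something stated in this excerpt):

```python
import torch
import torch.nn.functional as F

def sign_only_step(model, x, y, alpha):
    """One iteration of a sign-based attack: the gradient magnitude is
    discarded and only its sign scales the fixed step alpha."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + alpha * grad.sign()).detach()  # |grad| never influences the step
```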



Examples


Embodiment Construction

[0037] Embodiments of the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0038] Here, Inception-v3 is selected as the target model, and the target model is attacked with an adversarial sample construction method that adaptively adjusts the noise step size.
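As a stand-in for the target model, a pretrained torchvision Inception-v3 can be loaded as sketched below (the patent trains its own network in step 2; the pretrained weights here are only an assumption for illustration):

```python
import torch
from torchvision.models import inception_v3, Inception_V3_Weights

# Pretrained Inception-v3 used as a stand-in for the target model N(.)
weights = Inception_V3_Weights.IMAGENET1K_V1
target_model = inception_v3(weights=weights).eval()
preprocess = weights.transforms()  # resize/crop/normalize expected by the model
```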

[0039] Step 1. Pair the collected images with their label information, where the categories are numbered 0 to n-1, i.e., there are n categories across all images. This step specifically includes the following processing (a code sketch follows these definitions):

[0040] (1-1) Use the ImageNet large-scale image classification dataset to form the image collection IMG:

[0041] IMG = {x_i | i = 1, 2, ..., N_d}

[0042] where x_i represents an image and N_d indicates the total number of images in the image collection IMG;

[0043] (1-2) Construct the image description set GroundTruth corresponding to each image in the image set IMG:

[0044] GroundTruth = {y_i | i = 1, 2, ..., N_d}

[0045] where y_i indicates the category number corresponding to each image and N_d indicates the total number of images in the image collection IMG.
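A minimal sketch of this step, assuming the images come from an ImageNet-style directory readable with torchvision (the directory path below is hypothetical):

```python
from torchvision import datasets, transforms

# Build the paired sets IMG = {x_i} and GroundTruth = {y_i}, i = 1..N_d,
# with category numbers in 0..n-1.
dataset = datasets.ImageFolder(
    root="data/imagenet/val",  # illustrative path
    transform=transforms.Compose([
        transforms.Resize(342),
        transforms.CenterCrop(299),  # Inception-v3 input resolution
        transforms.ToTensor(),
    ]),
)
IMG = [img for img, _ in dataset]              # image collection, N_d entries
GroundTruth = [label for _, label in dataset]  # category number y_i per image
```

In practice one would iterate lazily with a DataLoader instead of materializing both lists, but the two lists mirror the sets defined above.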



Abstract

The invention discloses a step-size-adaptive adversarial attack method based on model extraction. Step 1: construct an image data set. Step 2: train a convolutional neural network on the image set IMG as the target model to be attacked. Step 3: calculate the cross-entropy loss function to realize model extraction of the convolutional neural network, and initialize the gradient value and the step size g_1 of the iterative attack. Step 4: form a new adversarial sample x_1. Step 5: recalculate the cross-entropy loss function and use the new gradient value to update the step size for adding adversarial noise in the next iteration. Step 6: repeat the input-image, calculate-cross-entropy-loss, calculate-step-size, update-adversarial-sample process of step 5 another T-1 times to obtain the final iterative adversarial sample x'_i, then input the adversarial sample into the target model for classification and obtain the classification result N(x'_i). Compared with the prior art, the present invention achieves a better attack effect and has a stronger non-black-box attack capability than current iterative methods.
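Read end to end, steps 3 to 6 describe an iterative attack that re-estimates the noise step from the gradient recovered at every iteration. The sketch below follows that outline; the exact step-size update rule is not given in this excerpt, so the magnitude-ratio rule used here is an assumption:

```python
import torch
import torch.nn.functional as F

def adaptive_step_attack(model, x, y, T=10, eps=8 / 255, g1=2 / 255):
    """Iterative attack whose noise step is recomputed from each iteration's
    gradient (steps 3-6 of the abstract); the rescaling rule is illustrative."""
    x_adv = x.clone().detach()
    alpha, g_prev = g1, None                       # initial step size g_1 (step 3)
    for _ in range(T):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)    # cross-entropy loss (steps 3 and 5)
        grad, = torch.autograd.grad(loss, x_adv)
        g_cur = grad.abs().mean()
        if g_prev is not None:
            # Assumed rule: rescale the step by the change in gradient magnitude.
            alpha = alpha * float((g_cur / (g_prev + 1e-12)).clamp(0.5, 2.0))
        g_prev = g_cur.detach()
        x_adv = x_adv.detach() + alpha * grad.sign()          # add adversarial noise
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv                                   # final adversarial sample x'_i
```

Classifying the result, e.g. `model(x_adv).argmax(dim=1)`, then gives the classification result N(x'_i) in the abstract's notation.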

Description

Technical field

[0001] The invention relates to the field of machine learning security technology, and in particular to a non-black-box iterative adversarial attack method oriented to deep image recognition systems.

Background technique

[0002] In recent years, with the continuous progress and development of machine learning theory and technology, especially breakthroughs in the fields of computer vision and multimedia, technologies such as medical image processing, biological image recognition, and face recognition have been widely applied. However, the rapid development of the field of machine learning also brings many security problems. In systems closely related to security and privacy, such as autonomous driving, health systems, and financial systems, the security of machine learning poses a potential threat to people's vital interests and even lives. Therefore, how to maintain the security of machine learning systems and how to protect user privacy has become the basis for the dev...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62, G06V10/764, G06V10/774, G06N3/04
Inventor: 韩亚洪, 石育澄
Owner: TIANJIN UNIV