
Adversarial attack method based on training set data

A training-set data technology, applied in the fields of instruments, character and pattern recognition, computer components, etc.; it addresses the problem that the linear explanation of adversarial examples is unconvincing, and achieves the effect of a very small perturbation.

Active Publication Date: 2020-08-04
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

In Tanay and Griffin's view, the linearity explanation for the existence of adversarial examples is not convincing.




Embodiment Construction

[0039] The present invention will be further described below in conjunction with the accompanying drawings and embodiments, which are not to be construed as limiting the present invention.

[0040] Figure 1 shows the overall flowchart of the adversarial attack method based on training set data according to the present invention. YOLO-v3 is selected as the target model. The method specifically includes the following steps:

[0041] Step 1: train the detection model using the image collection Img composed of the large-scale VOC image classification dataset:

[0042] $Img = \{x_1, x_2, \ldots, x_{N_d}\}$

[0043] where $x_i$ represents an image and $N_d$ denotes the total number of images in the image collection Img;

[0044] Build a collection IMG of image sets (consisting of Img), where each image $x_k$ has a corresponding detection-frame set $T_k$:

[0045] $T_k = \{(q_{11}, p_{11}, q_{21}, p_{21}, l_1), \ldots, (q_{1i}, p_{1i}, q_{2i}, p_{2i}, l_i)\}, \; i = 1, 2, \ldots, R_n$

[0046] Among them, $(q_{11}, p_{11}, q_{21}, p_{21})$ represents the coord...
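Paragraph [0046] is truncated in the source, but the structure of $T_k$ above is explicit enough to sketch. A minimal Python illustration follows, assuming $(q_{1i}, p_{1i})$ and $(q_{2i}, p_{2i})$ are opposite corners of the i-th detection box and $l_i$ is its class label; the names DetectionBox and boxes are illustrative, not taken from the patent.

    # Illustrative sketch (not from the patent text): one way to store the
    # per-image detection-frame set T_k described above.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DetectionBox:
        """One tuple (q1, p1, q2, p2, l) of T_k."""
        q1: float   # x-coordinate of one box corner
        p1: float   # y-coordinate of that corner
        q2: float   # x-coordinate of the opposite corner
        p2: float   # y-coordinate of the opposite corner
        label: int  # class label l_i

    # T_k for image x_k is then a list of R_n boxes, and the collection IMG
    # maps every image in Img to such a list.
    boxes: List[DetectionBox] = [
        DetectionBox(48.0, 240.0, 195.0, 371.0, label=11),  # hypothetical box
    ]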



Abstract

The invention discloses an adversarial attack method based on training set data. The method comprises the following steps: step 1, training a detection model by using the VOC2007 image dataset; step 2, screening the images in the training set to obtain a single-class training image set Y; step 3, storing the image set Y in a KD-tree; step 4, for a picture to be attacked, querying through the KD-tree the closest training image in Y that belongs to a different class; step 5, constructing initial radial noise z*; step 6, constructing a perturbation space and randomly sampling it to obtain η; step 7, adjusting the amount of perturbation inside the image detection frame and generating a new adversarial sample x′ according to η; step 8, querying the target model with the new adversarial sample x′; step 9, repeating steps 5 to 8 until the attack succeeds, yielding the final adversarial sample x′, which is input into the target model for classification to obtain the classification result F(x′). The attack takes effect quickly, and the generated perturbation is very small.
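Steps 2 to 4 of the abstract (screening a single-class image set Y, storing it in a KD-tree, and querying the nearest training image of a different class) can be sketched with an off-the-shelf KD-tree. The sketch below uses scipy.spatial.cKDTree on flattened images; Y_images and Y_labels are hypothetical stand-ins, and this is an illustration under those assumptions, not the patent's own implementation.

    # A minimal sketch of abstract steps 2-4, assuming images are flattened
    # to equal-length float vectors. Y_images / Y_labels are hypothetical.
    import numpy as np
    from scipy.spatial import cKDTree

    def build_tree(Y_images):
        """Step 3: store the single-class image set Y in a KD-tree."""
        flat = Y_images.reshape(len(Y_images), -1).astype(np.float32)
        return cKDTree(flat)

    def nearest_other_class(tree, Y_images, Y_labels, x, x_label, k=10):
        """Step 4: closest image in Y whose class differs from x's class."""
        dists, idxs = tree.query(x.reshape(1, -1).astype(np.float32), k=k)
        for d, i in zip(dists[0], idxs[0]):
            if Y_labels[i] != x_label:          # skip same-class neighbours
                return Y_images[i], d
        raise ValueError("no differing-class neighbour found; increase k")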

Description

technical field

[0001] The invention relates to the technical field of machine learning security, in particular to a gray-box decision-based adversarial attack method oriented to deep image recognition systems.

Background technique

[0002] Many deep learning models without defense measures are considered vulnerable to adversarial attacks: adding a small perturbation to the original image can maliciously mislead the model into misclassifying it. Researchers have done extensive work on designing different adversarial attack methods to fool state-of-the-art deep convolutional networks. Attacks can be roughly divided into three categories: ① gradient-based iterative attacks, such as FGSM, I-FGSM, VR-IGSM and a series of other FGSM variants; ② optimization-based iterative attacks, such as C&W (Carlini & Wagner); ③ decision-boundary-based attacks, such as the boundary attack.

[0003] Tanay and Griffin provide a boundary-tilting perspective on the existence of adv...
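To make the first of the attack categories in paragraph [0002] concrete: FGSM (Goodfellow et al.) perturbs an input one step along the sign of the loss gradient, $x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y))$. A minimal PyTorch sketch follows; model, x and y are assumed to be a classifier and a valid input batch with pixels in [0, 1], and are not part of the patent.

    # Standard FGSM step, sketched for illustration; not the patent's method.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8 / 255):
        """Return x shifted one epsilon-step along the loss-gradient sign."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)     # classification loss J
        loss.backward()
        x_adv = x + eps * x.grad.sign()         # small, maximally misleading step
        return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in valid range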


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62
CPC: G06F18/24; G06F18/214
Inventor: Han Yahong, An Jianqiao, Shi Yucheng, Jia Fan
Owner: TIANJIN UNIV