Adversarial attack defense method based on adversarial sample training

A technology relating to adversarial samples and adversarial training, applied in the field of artificial intelligence; it can solve problems such as unreliable accuracy, and achieves the effects of improving robustness, suppressing overfitting, and enhancing generalization ability.

Inactive Publication Date: 2019-10-15
WUHAN UNIV

AI Technical Summary

Problems solved by technology

Although adding adversarial training can improve the recognition accuracy of a deep learning network under attack, a large gap still remains relative to the recognition accuracy on the original samples.


Embodiment Construction

[0028] To facilitate understanding and implementation by those of ordinary skill in the art, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the embodiments described here are intended only to illustrate and explain the present invention, not to limit it.

[0029] Referring to Figure 1, the adversarial attack defense method based on adversarial sample training provided by the present invention comprises the following steps:

[0030] Step 1: For the training set samples, introduce a prior distribution over the category labels of the training samples and apply smoothing correction to the labels, so that during training the network does not place complete trust in the training labels being entirely correct.

[0031] So that the network does not fully trust the classification labels of the training samples, a uniform distribution is introduced as the prior over the category labels and mixed into the one-hot labels.
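Step 1 is the standard label-smoothing technique. A minimal sketch follows; the smoothing factor ε and the uniform prior are illustrative assumptions based on the common formulation, not values taken from the patent:

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Mix a one-hot label with a uniform prior over K classes:
    y_smooth = (1 - epsilon) * y + epsilon / K."""
    k = one_hot.shape[-1]
    return (1.0 - epsilon) * one_hot + epsilon / k

# Example: 3-class one-hot label for class 0
y = np.array([1.0, 0.0, 0.0])
y_s = smooth_labels(y, epsilon=0.1)
# y_s ≈ [0.9333, 0.0333, 0.0333]; the entries still sum to 1
```

Because the smoothed target never reaches exactly 1 or 0, the cross-entropy loss stops rewarding the network for pushing its output toward full confidence on the training labels.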


Abstract

The invention discloses an adversarial attack defense method based on adversarial sample training, addressing the adversarial attack defense problem of deep learning networks. The method comprises the following steps: first, a prior distribution over the category labels of the training samples is introduced and the labels are subjected to smoothing correction, so that the network does not excessively believe that the training labels are completely correct during training; second, adversarial samples are added during network training and the original loss function is modified to reflect the contribution of the adversarial samples, so that the network gains resistance to gradient attacks; finally, random inactivation (dropout) is applied to the network neurons, reducing overfitting to a certain extent. The method suppresses overfitting and enhances the generalization ability of the network, thereby improving the robustness of the network in defending against adversarial sample attacks.
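The second and third steps in the abstract, the modified loss and the random inactivation of neurons, can be sketched as follows. This is a hedged illustration of the standard techniques (a convex mix of clean and adversarial loss, and inverted dropout); the weight α and the dropout rate p are assumed hyperparameters, not values from the patent:

```python
import numpy as np

def adversarial_loss(loss_clean, loss_adv, alpha=0.5):
    """Step 2: modified objective that reflects the adversarial samples'
    contribution, as a weighted sum of clean and adversarial loss."""
    return alpha * loss_clean + (1.0 - alpha) * loss_adv

def dropout(activations, p=0.5, rng=None, training=True):
    """Step 3: random inactivation of neurons (inverted dropout).
    Each unit is zeroed with probability p; survivors are rescaled
    by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return activations
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

# Usage sketch
total = adversarial_loss(loss_clean=1.0, loss_adv=3.0, alpha=0.5)
a = np.ones(1000)
d = dropout(a, p=0.5, rng=np.random.default_rng(0))
# roughly half the entries of d are zero; the surviving entries equal 2.0
```

Rescaling by 1/(1-p) at training time means no correction is needed at inference, where `training=False` returns the activations untouched.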

Description

Technical Field

[0001] The invention belongs to the technical field of artificial intelligence and relates to a defense method against deep learning adversarial sample attacks, in particular to an adversarial attack defense method based on adversarial sample training.

Background Technique

[0002] Deep learning technology has been widely applied in image classification and recognition, natural language processing, speech processing, and other fields. However, artificial intelligence systems face the security risk of adversarial sample attacks. An adversarial sample attack adds perturbations to the input samples that are difficult for humans to detect, so that samples a human would classify correctly are misrecognized and misclassified by the machine.

[0003] In actual application scenarios, adversarial sample attacks can cause serious security problems. For example, in face recognition, attackers can use adversarial samples to bypass verification and gain per...
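The gradient attack the method defends against can be illustrated with the fast gradient sign method (FGSM), the canonical way of crafting such imperceptible perturbations. The toy linear softmax classifier and ε below are illustrative assumptions, not part of the patent:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, y):
    return -np.sum(y * np.log(p + 1e-12))

def fgsm_example(x, y, W, eps=0.1):
    """FGSM: move x a small step along the sign of the input gradient
    of the loss, which increases the loss while keeping the
    perturbation small in the infinity norm."""
    p = softmax(W @ x)
    grad_x = W.T @ (p - y)          # dL/dx for softmax cross-entropy
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))          # toy 3-class linear classifier
x = rng.normal(size=4)               # a clean input
y = np.array([0.0, 1.0, 0.0])        # its true label
x_adv = fgsm_example(x, y, W, eps=0.1)
loss_clean = cross_entropy(softmax(W @ x), y)
loss_adv = cross_entropy(softmax(W @ x_adv), y)
# for this convex model the adversarial loss is never below the clean loss
```

The perturbation is bounded entrywise by ε, which is why, for small ε on image inputs, the change is imperceptible to a human while still degrading the classifier.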


Application Information

IPC(8): G06N3/08
CPC: G06N3/084
Inventors: 王中元, 曾诚, 何政, 傅佑铭
Owner: WUHAN UNIV