
Black-box attack defense system and method based on neural network interlayer regularization

A neural network and defense system technology, applied to biological neural network models, neural learning methods, neural architectures, etc. It addresses problems such as training data sets that are difficult to obtain, attacks that cannot make progress when the number of queries is limited, and impaired black-box attack efficiency, and achieves the effects of overcoming poor transferability, enhancing transferability, and gaining robustness through adversarial training.

Active Publication Date: 2021-03-09
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0003] The more common attack methods at present are black-box attacks and white-box attacks. Black-box attacks fall into two categories: transfer-based methods that train a substitute model, and decision-based methods that estimate gradients through repeated queries. Both first approximate the black-box model, either by fitting a substitute model or by estimating a gradient close to that of the black-box model, and then apply a mainstream white-box attack. To train the substitute model, the former usually needs the attacked model's training data set as well as its inputs and outputs, that is, a large amount of information beyond the internal parameters; this information, especially the training data set, is difficult to obtain in practical applications, or can only be obtained a limited number of times, so generating a substitute model in this way is restricted in many cases. The latter queries the inputs and outputs of the target model many times and estimates its gradient; when the number of queries is large enough, the estimated gradient approaches the real gradient of the target model and reveals its decision boundary. The problem with this method is that repeated querying brings heavy overhead, and it makes no progress against black-box models that limit the number of queries, which seriously affects the efficiency of black-box attacks.
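As context for the transfer-based branch described above (and not as the patented method itself), the following minimal sketch crafts an FGSM adversarial example on a locally held substitute model and only sends the finished sample to the black-box model. The model choices, input shapes, and perturbation budget are illustrative assumptions.

```python
# Illustrative sketch of a transfer-based black-box attack (not the patented method).
# Assumes PyTorch/torchvision; model choices and epsilon are arbitrary examples.
import torch
import torch.nn.functional as F
from torchvision import models

substitute = models.resnet18(weights=None).eval()   # white-box substitute model the attacker controls
black_box  = models.vgg11(weights=None).eval()      # target model: only inputs/outputs are visible

def fgsm_on_substitute(x, y, eps=8 / 255):
    """Craft an adversarial example on the substitute and rely on transferability."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    # One signed-gradient step (FGSM); the perturbation is computed only from the substitute.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)          # stand-in for an input picture
y = torch.tensor([0])                   # stand-in label
x_adv = fgsm_on_substitute(x, y)
# The attacker only ever queries the black box with the finished sample:
print(black_box(x_adv).argmax(dim=1))
```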



Examples


Embodiment Construction

[0032] The present invention will be further described below in conjunction with the accompanying drawings. Embodiments of the present invention include, but are not limited to, the following examples.

[0033] A black-box attack defense system based on neural network interlayer regularization, comprising:

[0034] The first source model adopts a ResNet network based on residual modules. In this embodiment, the first source model is attacked with a white-box attack method and finally outputs the first adversarial sample sequence. Taking original pictures as the input, for example: a set of original pictures is input, a white-box attack method adds an appropriate adversarial perturbation to attack the first source model, and the first adversarial sample sequence is generated. The first adversarial sample sequence also has a certain degree of transferability, but for the second source model, because the decision direction of the first adversarial...
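As a rough illustration of paragraph [0034] only, the sketch below generates a "first adversarial sample sequence" by running an iterative signed-gradient (PGD-style) white-box attack against a torchvision ResNet. The specific attack, step sizes, and model depth are assumptions, since the text above does not fix them.

```python
# Minimal sketch of the first source model stage: a ResNet attacked with a
# white-box method to emit the "first adversarial sample sequence".
# The iterative FGSM/PGD attack, step sizes, and model depth are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

first_source = models.resnet50(weights=None).eval()   # ResNet-based first source model

def first_adversarial_sequence(pictures, labels, eps=8 / 255, alpha=2 / 255, steps=5):
    """White-box iterative attack on the first source model for a batch of original pictures."""
    x_adv = pictures.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(first_source(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Keep the perturbation "appropriate": bounded around the originals and in valid pixel range.
        x_adv = torch.clamp(x_adv, pictures - eps, pictures + eps).clamp(0, 1)
    return x_adv

pictures = torch.rand(4, 3, 224, 224)   # a set of original pictures (stand-in data)
labels = torch.randint(0, 1000, (4,))
first_sequence = first_adversarial_sequence(pictures, labels)
```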



Abstract

The invention relates to the field of artificial intelligence security, in particular to a black-box attack defense system based on neural network interlayer regularization, which comprises a first source model, a second source model and a third source model. A black-box attack defense method based on neural network interlayer regularization comprises the steps of: S1, inputting a picture into the first source model for a white-box attack and outputting a first adversarial sample sequence; S2, inputting the first adversarial sample sequence into the second source model and outputting a second adversarial sample sequence; S3, inputting the second adversarial sample sequence into the third source model for a black-box attack and outputting a third identification sample sequence; and S4, inputting the third identification sample sequence into the third source model for adversarial training and updating the third source model. An adversarial sample generated with this algorithm has high transferability to the target model, and adversarial training also effectively defends the target model against attack.
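Read as a pipeline, steps S1 to S4 chain three source models and finish with adversarial training of the third. The sketch below is a non-authoritative outline of that data flow only: a single-step signed-gradient attack stands in for the unspecified white-box and black-box attacks, the interlayer regularization of the second source model is not reproduced, and all model and optimizer choices are placeholders.

```python
# Non-authoritative sketch of the S1-S4 flow described in the abstract.
# The concrete attacks and the interlayer regularization term are stand-ins;
# the patent text shown here does not disclose their exact form.
import torch
import torch.nn.functional as F
from torchvision import models

def signed_grad_step(model, x, y, eps=8 / 255):
    """One FGSM-style step; used as a stand-in for the unspecified attacks."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

first_src = models.resnet18(weights=None).eval()    # S1 model (ResNet per the embodiment)
second_src = models.resnet34(weights=None).eval()   # S2 model with interlayer regularization (assumed)
third_src = models.resnet50(weights=None)           # S3/S4 target to be adversarially trained
optimizer = torch.optim.SGD(third_src.parameters(), lr=1e-3)

pictures = torch.rand(2, 3, 224, 224)
labels = torch.randint(0, 1000, (2,))

first_seq = signed_grad_step(first_src, pictures, labels)     # S1: white-box attack -> first sequence
second_seq = signed_grad_step(second_src, first_seq, labels)  # S2: second source model -> second sequence
third_seq = signed_grad_step(third_src, second_seq, labels)   # S3: attack on third model -> third sequence

third_src.train()                                             # S4: adversarial training of the third model
optimizer.zero_grad()
F.cross_entropy(third_src(third_seq), labels).backward()
optimizer.step()                                              # updated third source model
```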

Description

Technical field
[0001] The invention relates to the field of artificial intelligence security, in particular to a black-box attack defense system and method based on neural network interlayer regularization.
Background technique
[0002] When a small perturbation is added to an image signal and the perturbed image is input to a convolutional neural network for a classification task, it will be misrecognized by the network. This technique has wide uses: in vehicle detection systems, deceiving the detection system with a small perturbation helps to test and improve its robustness; in face recognition systems, deceiving the face recognition network helps to test its robustness and security; and in unmanned driving systems, it helps to test the robustness of the object classification and target detection networks in machine vis...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F21/55; G06N3/04; G06N3/08
CPC: G06F21/55; G06N3/082; G06N3/045; Y02T10/40
Inventor: 李晓锐, 崔炜煜, 王文一, 陈建文
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA