
Black-box attack defense system and method based on neural network middle layer regularization

A neural-network defense technology, applied to biological neural network models, neural learning methods, and neural architectures. It addresses the difficulty of obtaining the attacked model's training data set, the poor efficiency of black-box attacks, and the inability to proceed when queries are limited; it achieves robust adversarial training and resolves the problem of poor transfer quality.

Active Publication Date: 2022-05-17
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0003] The two most common attack methods today are the black-box attack and the white-box attack. Black-box attacks fall into two categories: transfer-based attacks, which train a substitute model, and decision-based attacks, which estimate gradients through repeated queries. Both first approximate the black-box model (by substituting a model close to it, or by estimating a gradient close to its true gradient) and then apply a mainstream white-box attack. The former usually requires much information beyond the model's inputs, outputs, and internal parameters, most notably the attacked model's training data set; in practical applications this information, especially the training data set, is difficult to obtain or can be obtained only in limited quantity, so substitute-model methods are infeasible in many cases. The latter queries the target model's inputs and outputs many times and estimates the gradient; with enough queries the estimate approaches the true gradient of the target model and recovers its decision boundary. However, the large number of queries means the method cannot make progress against black-box models that limit query counts, which severely reduces the efficiency of black-box attacks.
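As a hedged illustration of the query-based branch described above (the stand-in logistic model and all names are assumptions for the sketch, not from the patent), a central-difference scheme can estimate a black-box model's input gradient from output queries alone. Note the cost: two queries per input dimension, which is exactly why query-limited black-box models defeat this approach.

```python
import numpy as np

# Hypothetical black box: a tiny logistic "model" we may only query.
rng = np.random.default_rng(0)
w = rng.normal(size=8)

def black_box(x):
    # Exposes only the output probability, as a query-only API would.
    return 1.0 / (1.0 + np.exp(-w @ x))

def estimate_gradient(f, x, eps=1e-4):
    # Central finite differences: 2 queries per coordinate (16 here).
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

x = rng.normal(size=8)
g_est = estimate_gradient(black_box, x)

# For this stand-in model the true input gradient is known analytically,
# so we can check that the query-based estimate converges to it.
p = black_box(x)
g_true = p * (1 - p) * w
print(np.max(np.abs(g_est - g_true)) < 1e-6)  # prints True
```

With the gradient estimated, the attacker would proceed exactly as in a white-box attack; the sketch shows why the query budget, not the math, is the bottleneck.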



Embodiment Construction

[0032] The present invention will be further described below in conjunction with the accompanying drawings. Embodiments of the present invention include, but are not limited to, the following examples.

[0033] A black-box attack defense system based on neural-network middle-layer regularization, comprising:

[0034] The first source model adopts a ResNet network built from residual modules. In this embodiment, the first source model is attacked with a white-box method and finally outputs the first adversarial sample sequence. Taking original pictures as the input: a group of original pictures is input, an appropriate adversarial perturbation is added by the white-box attack method to attack the first source model, and the first adversarial sample sequence is generated. The first adversarial sample sequence has a certain degree of transferability, but for the second source model, because the decision direction of the first adversarial...
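A minimal sketch of the white-box step described above, assuming an FGSM-style signed-gradient attack (the patent does not name the specific white-box method) and substituting a tiny linear classifier for the ResNet so the example stays self-contained; the model, shapes, and epsilon are illustrative only.

```python
import numpy as np

# Stand-in for the first source model: a linear classifier with a known
# input gradient (the embodiment uses a ResNet; any differentiable model works).
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 16))  # 3 classes, 16-dimensional "pictures"

def logits(x):
    return W @ x

def loss_grad(x, y):
    # Gradient of the cross-entropy loss with respect to the input x.
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

def fgsm(x, y, eps=0.1):
    # One signed-gradient step: a small, bounded adversarial perturbation.
    return x + eps * np.sign(loss_grad(x, y))

# Generate the "first adversarial sample sequence" from a batch of pictures.
batch = rng.normal(size=(4, 16))
labels = [int(np.argmax(logits(x))) for x in batch]
adv_seq = [fgsm(x, y) for x, y in zip(batch, labels)]
print(len(adv_seq))  # → 4
```

Each sample differs from its original by at most eps per coordinate, which is what makes the perturbation "appropriate" (small) while still moving against the model's decision.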


Abstract

The invention relates to the field of artificial intelligence security, in particular to a black-box attack defense system based on neural-network middle-layer regularization, comprising a first source model, a second source model, and a third source model. The black-box attack defense method comprises: S1, inputting pictures into the first source model for a white-box attack and outputting a first adversarial sample sequence; S2, inputting the first adversarial sample sequence into the second source model and outputting a second adversarial sample sequence; S3, inputting the second adversarial sample sequence into the third source model for a black-box attack and outputting a third recognition sample sequence; S4, inputting the third recognition sample sequence into the third source model for adversarial training and updating the third source model. The adversarial samples generated by this method transfer well to the target model, and adversarial training with them effectively defends the target model against attack.
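The S1-S4 data flow can be sketched as follows. The three source models are stand-in linear maps and each attack step is reduced to a signed-gradient perturbation, so this shows only the plumbing of the pipeline, not the patent's actual middle-layer regularization or training rule; every function and shape here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8
model1, model2, model3 = (rng.normal(size=(dim, dim)) for _ in range(3))

def perturb(model, xs, eps=0.1):
    # Signed-gradient step against ||model @ x||^2 / 2 (FGSM-like stand-in).
    return [x + eps * np.sign(model.T @ (model @ x)) for x in xs]

pictures = [rng.normal(size=dim) for _ in range(4)]
seq1 = perturb(model1, pictures)  # S1: white-box attack on the first model
seq2 = perturb(model2, seq1)      # S2: pass through the second model
seq3 = perturb(model3, seq2)      # S3: black-box attack on the third model

# S4: adversarial training -- update the third model with the samples.
lr = 0.01
for x in seq3:
    model3 -= lr * np.outer(model3 @ x, x)
print(len(seq3))  # → 4
```

The point of the chaining is that each stage re-perturbs the previous stage's output, so the final samples embody all three models' decision directions, which is what the abstract credits for their high transferability.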

Description

Technical Field

[0001] The invention relates to the field of artificial intelligence security, in particular to a black-box attack defense system and method based on neural-network middle-layer regularization.

Background Technique

[0002] When a small perturbation is added to an image signal and the perturbed image is input to a convolutional neural network for a classification task, the network misrecognizes it. This property is widely exploited to test deployed systems: deceiving a vehicle detection system with a small perturbation helps improve the robustness of the vehicle detection system; deceiving a face recognition detection system helps test the robustness and security of the face recognition network; and in an unmanned driving system it helps test the robustness of the object classification and target detection networks in machine vis...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F21/55; G06N3/04; G06N3/08
CPC: G06F21/55; G06N3/082; G06N3/045; Y02T10/40
Inventors: 李晓锐, 崔炜煜, 王文一, 陈建文
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA