
Method and device for evaluating robustness of neural network image classification model

A classification model and neural network technology, applied in the field of neural networks, can solve the problems of poor specificity, poor versatility, and large disturbance of adversarial samples, and achieve the effects of strong pertinence, reduced scope, and reduced interference.

Pending Publication Date: 2022-03-25
北京墨云科技有限公司

AI Technical Summary

Problems solved by technology

[0003] To this end, embodiments of the present invention provide a method and device for evaluating the robustness of a neural network image classification model, to solve the problems in the prior art that the adversarial examples used to probe robustness are poorly targeted, introduce large perturbations, and generalize poorly.



Examples


Embodiment

[0044] The convolutional neural network (CNN), as the typical representative of deep neural network models, is very widely used in vision models. At the same time, interpretability and visualization algorithms reveal the correlation between a model's input and its output, so model interpretability and visualization also have important research value.
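The "correlation between input and output" that visualization algorithms expose is typically rendered as a heat map over the input. As a minimal sketch (not the patent's specific algorithm), a CAM-style heat map can be formed as a class-weighted sum of the final convolutional layer's feature maps; the `weights` parameter is an assumption standing in for whatever channel weighting the chosen visualization method produces (Grad-CAM, for instance, would use pooled gradients):

```python
import numpy as np

def class_activation_map(feature_maps, weights):
    """Minimal CAM-style heat map: weighted sum of the last conv layer's
    feature maps, followed by ReLU and normalization to [0, 1].

    feature_maps: K x H x W array from the final convolutional layer
    weights:      length-K per-channel class weights (hypothetical; Grad-CAM
                  would derive these from pooled gradients)
    """
    cam = np.tensordot(weights, feature_maps, axes=1)  # contract over K -> H x W
    cam = np.maximum(cam, 0.0)                         # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

High values in the returned map mark the pixels the model relied on most, which is exactly the "sensitive area" notion used later in the method.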

[0045] There are two main ways to attack a neural network model with adversarial samples: adding a perturbation to the entire image, or adding a perturbation to a specific region of the image to construct an adversarial patch. The traditional method of perturbing the entire image modifies the pixels in the image's sensitive region only indirectly and cannot apply the perturbation directly to that region. At the same time, perturbing the image globally may also cause the problem that th…



Abstract

The invention discloses a method and device for evaluating the robustness of a neural network image classification model. The method comprises the steps:
S1, obtaining a sample set comprising image samples;
S2, inputting any first image sample from the sample set into the neural network image classification model to obtain a feature map of the first image sample;
S3, obtaining a heat map of the first image sample, based on the neural network image classification model and the feature map, through a heat-map generation algorithm;
S4, calculating the sensitive area of the first image sample based on the heat map, and highlighting that sensitive area;
S5, perturbing the sensitive area with a perturbation algorithm to obtain a first adversarial sample;
S6, taking the first adversarial sample as the new first image sample and repeating S3 to S5 to obtain the final first adversarial sample;
S7, forming a sample pair from the final first adversarial sample and the initial first image sample, the sample pair being used to evaluate the robustness of the neural network image classification model.
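The iterative core of steps S3 to S6 can be sketched as a loop that recomputes the sensitive area after each perturbation. This is a minimal sketch under stated assumptions: `heat_map_fn` and `perturb_fn` are hypothetical callables standing in for the patent's unspecified heat-map generation and perturbation algorithms, and the 90th-percentile threshold for the sensitive area is an illustrative choice, not taken from the patent:

```python
import numpy as np

def sensitive_region(heat_map, quantile=0.9):
    """S4: threshold the heat map to a boolean mask of the sensitive area.
    The quantile cutoff is an assumed, illustrative choice."""
    return heat_map >= np.quantile(heat_map, quantile)

def generate_adversarial(image, heat_map_fn, perturb_fn, iterations=5):
    """S3-S6: iteratively perturb only the currently sensitive area.

    heat_map_fn(image) -> H x W heat map   (hypothetical, e.g. CAM/Grad-CAM)
    perturb_fn(image, mask) -> new image   (hypothetical perturbation step)
    """
    adversarial = image
    for _ in range(iterations):
        mask = sensitive_region(heat_map_fn(adversarial))  # S3-S4: locate sensitive area
        adversarial = perturb_fn(adversarial, mask)        # S5: perturb only that area
    return adversarial                                     # S6: final adversarial sample
```

Pairing the returned sample with the original input (step S7) then gives one (clean, adversarial) pair for the robustness evaluation.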

Description

technical field

[0001] Embodiments of the present invention relate to the field of neural networks, and in particular to a method and device for evaluating the robustness of a neural network image classification model.

Background technique

[0002] With the large-scale application of deep neural network (DNN) models, their security and stability have received extensive attention. In 2013, Szegedy et al. found that deep neural network models are vulnerable to attack by adversarial samples. Such an attack adds a small perturbation to the original image; the perturbation is barely noticeable to the human eye, yet it can cause a deep neural network model to misjudge. Therefore, evaluating the safety and robustness of neural network models has become very important.

Contents of the invention

[0003] To this end, the embodiments of the present invention provide a method and device for evaluating the robustness of a …

Claims


Application Information

IPC(8): G06K9/62; G06N3/04; G06N3/08; G06V10/764
CPC: G06N3/084; G06N3/045; G06F18/2415; Y02T10/40
Inventor: 何召阳, 靳宇馨, 刘乃海, 李乾坤, 刘兵
Owner: 北京墨云科技有限公司