Conditional adversarial sample-based model poisoning method and system

A conditional adversarial sample technology, applied to neural learning methods, biological neural network models, computer components, etc., which solves problems such as the data being defended against, invalid poisoning, and the inability to degrade model performance, thereby reducing the amount of poisoning required, limiting the performance loss, and strengthening concealment.

Active Publication Date: 2022-07-15
COMP APPL RES INST CHINA ACAD OF ENG PHYSICS


Problems solved by technology

[0006] In view of the above-mentioned research problems, the purpose of the present invention is to provide a model poisoning method based on conditional adversarial samples, which addresses the prior-art situation in which invalid poisoning is likely to occur, so that the data is defended against and the performance of the model cannot be reduced.



Embodiment Construction

[0038] The present invention will be further described below with reference to the accompanying drawings and specific embodiments.

[0039] This embodiment proposes a model poisoning method based on conditional adversarial examples. Specifically, the adversarial perturbation is split into two components, a and b. The goal is that the original sample exhibits the characteristics of an adversarial sample after perturbation b is added, but exhibits the characteristics of a normal sample when a and b are added simultaneously. Through the addition of a and b, the generated conditional adversarial samples are classified normally by the classification model, yet their classification logic differs substantially from that of normal training samples. Adding such conditional adversarial samples to the training process of a detection model reduces the efficiency with which the model learns classification features, and may even prevent it from learning the classification features corre...
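The two-perturbation condition described above can be illustrated with a minimal sketch. The toy linear classifier, the FGSM-style step for b, and the simple counter-perturbation a = -0.9·b are all illustrative assumptions, not the patent's actual construction:

```python
import numpy as np

# Toy binary linear classifier standing in for the pre-trained detection
# model in the text (an assumption; the patent does not fix an architecture).
w = np.array([1.0, -1.0])
b0 = 0.0

def predict(x):
    """Return the predicted class (0 or 1) of the toy model."""
    return int(w @ x + b0 > 0)

def make_conditional_adversarial(x, eps=0.5):
    """Split the adversarial perturbation into two parts a and b such that
    x + b is misclassified while x + a + b is classified like x."""
    y = predict(x)
    # b: gradient-sign-style step that pushes x across the decision boundary
    sign = -1.0 if y == 1 else 1.0
    b = sign * eps * np.sign(w)
    while predict(x + b) == y:          # keep stepping until the label flips
        b += sign * eps * np.sign(w)
    # a: counter-perturbation that restores the original prediction
    # when added together with b (here simply a scaled negative of b)
    a = -0.9 * b
    while predict(x + a + b) != y:
        a -= 0.1 * b
    return a, b

x = np.array([2.0, 1.0])
a, b = make_conditional_adversarial(x)
assert predict(x + b) != predict(x)      # condition 1: x + b is adversarial
assert predict(x + a + b) == predict(x)  # condition 2: x + a + b looks normal
```

In a real attack the two loops would be gradient-based optimizations against the detection model; the sketch only checks the two defining conditions.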



Abstract

The invention discloses a model poisoning method and system based on conditional adversarial samples, belonging to the technical field of artificial intelligence security, which solve the prior-art problem that invalid poisoning is likely to occur, leaving the data defended against and the performance of the model unreduced. The method obtains a training data set comprising a plurality of subsets of different categories, each subset comprising a plurality of normal samples; randomly selects one or more subsets in the training data set and initializes two disturbances for each normal sample in the selected subsets; detects the normal sample and the two disturbances based on a pre-trained detection model: if the requirement is met, the sum of the normal sample and the two disturbances is taken as a conditional adversarial sample, and if the requirement is not met, the disturbances are updated and the detection step is executed again; and replaces each selected normal sample with its corresponding conditional adversarial sample, obtaining a new training data set on which the detection model is trained. The method is used for model poisoning.
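The subset-selection and sample-replacement steps of the abstract can be sketched as follows. The dictionary layout and the `craft` callback are hypothetical conveniences; in the patent's method, crafting a sample is the iterative perturbation-update procedure the abstract describes:

```python
import random

def poison_training_set(dataset, craft, fraction=1.0, seed=0):
    """Replace each normal sample in one randomly chosen class subset with
    its conditional adversarial counterpart (steps 2-4 of the abstract).

    `dataset` maps class label -> list of samples; `craft` is a callable
    that turns one normal sample into a conditional adversarial sample.
    """
    rng = random.Random(seed)
    label = rng.choice(sorted(dataset))            # step 2: pick one subset
    # copy so the original training data set is left untouched
    poisoned = {k: list(v) for k, v in dataset.items()}
    for i, x in enumerate(poisoned[label]):
        if rng.random() < fraction:
            poisoned[label][i] = craft(x)          # steps 3-4: craft + replace
    return label, poisoned

# Hypothetical usage with a stand-in craft function
data = {"benign": [0.1, 0.2], "malware": [0.9, 1.0]}
label, new_data = poison_training_set(data, craft=lambda x: x + 0.01)
```

The returned `new_data` is the "new training data set" on which the detection model would then be retrained.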

Description

technical field
[0001] A model poisoning method and system based on conditional adversarial samples, used for model poisoning and belonging to the technical field of artificial intelligence security.
Background technique
[0002] Model poisoning attacks occur in the model training phase, and their targets can include various deep learning models, such as malware detection models, image classification models, and face recognition models. Such attacks can significantly reduce the performance of the target model. On the other hand, poisoning the data can also protect its copyright and prevent unauthorized use.
[0003] As deep neural networks have demonstrated performance exceeding traditional methods in many fields, deep learning models are widely used in various scenarios. At the same time, the training of existing deep learning models relies heavily on large training datasets, so the collection of training samples is an important pro...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08, G06N3/04, G06V10/774, G06V10/82, G06V10/764, G06V10/96, G06K9/62
CPC: G06N3/084, G06N3/047, G06F18/214, G06F18/2415
Inventor 刘小垒胥迤潇辛邦洲王玉龙杨润殷明勇
Owner COMP APPL RES INST CHINA ACAD OF ENG PHYSICS