
AI model privacy protection method based on adversarial samples for resisting membership inference attacks

An adversarial-sample-based privacy protection technology, applied in the field of protecting AI model privacy against membership inference attacks. It addresses the problems that existing defenses degrade target model performance, lengthen target model training time, and make target model training difficult to converge, achieving the effects of eliminating gradient instability and defending against membership inference attacks.

Status: Inactive · Publication Date: 2019-11-29
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Problems solved by technology

However, the disadvantage of this method is that modifying the loss function of the target model alters its training process, making training difficult to converge and significantly degrading the target model's performance after training completes.
In addition, the interactive training with the membership inference model lengthens the target model's training time.



Examples


Example 1

[0048] This example is carried out in Colab, and the deep learning framework used is PyTorch. ResNet-34 is used as the target model. The dataset is CIFAR-10, which contains 60,000 32×32 3-channel (RGB) images divided into 10 categories; 50,000 images are used as the training set and 10,000 as the test set. When training the target model, the loss function is set to cross entropy (CrossEntropyLoss), the Adam optimization method is used, the learning rate is set to 0.0005, and training runs for 20 epochs. After training, the target model achieves 99% accuracy on the training set and 56% accuracy on the test set.
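
The following is a minimal sketch of this training setup in PyTorch. Only the model, loss, optimizer, learning rate, and epoch count come from the text above; the batch size, preprocessing, and data paths are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.ToTensor()  # assumed; the patent does not specify preprocessing
train_set = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size assumed

model = models.resnet34(num_classes=10).to(device)            # ResNet-34 target model
criterion = nn.CrossEntropyLoss()                             # cross-entropy loss, as stated
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)   # Adam, lr 0.0005

for epoch in range(20):  # 20 training epochs
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```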

[0049] The architecture of the membership inference model used to construct adversarial examples is shown in Figure 2. It has two inputs: the target model's output for a given piece of data and the label of that data. The dimension parameters of each layer of the three sub-networks of the predictio...
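
The layer dimensions are truncated above, so the following is only a plausible PyTorch sketch of such a two-input membership inference model: one sub-network for the target model's prediction vector, one for the one-hot label vector, and a third that fuses them into a membership score. All hidden widths are assumptions.

```python
import torch
import torch.nn as nn

class MembershipInferenceModel(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.pred_net = nn.Sequential(   # processes the target model's prediction vector
            nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
        self.label_net = nn.Sequential(  # processes the one-hot label vector
            nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
        self.combine = nn.Sequential(    # fuses both branches into a member/non-member score
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, pred, one_hot_label):
        h = torch.cat([self.pred_net(pred), self.label_net(one_hot_label)], dim=1)
        return self.combine(h)

attack_model = MembershipInferenceModel(num_classes=10)
```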

Example 2

[0060] This example is carried out in the Colab environment, and the deep learning framework is PyTorch. A linear neural network containing only input and output layers is used as the target model. The dataset is the AT&T face image set, which contains 400 grayscale single-channel images of size 112×92 divided into 40 categories of 10 images each; 300 images are randomly selected as the training set and 100 as the test set. The loss function of the target model is cross entropy, the optimization method is SGD, the learning rate is 0.001, and training runs for 30 epochs. After training, the target model achieves 100% accuracy on the training set and 95% accuracy on the test set.
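
A minimal sketch of this target model, assuming flattened grayscale input; only the layer sizes, loss, optimizer, and learning rate come from the text.

```python
import torch
import torch.nn as nn

# Linear target model: the input layer (112*92 = 10304 pixels) maps directly
# to the 40 output classes, with no hidden layers.
model = nn.Sequential(nn.Flatten(), nn.Linear(112 * 92, 40))
criterion = nn.CrossEntropyLoss()                          # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # SGD, lr 0.001
# Train for 30 epochs as stated (training loop omitted).
```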

[0061] In this example, the architecture of the membership inference model used to construct adversarial examples is shown in Figure 1; the structure of the membership inference model used to construct the adversarial example in Ex...

Example 3

[0069] This example is carried out in the Colab environment, the deep learning framework is PyTorch, the target model is VGG-16, and the dataset is CIFAR-10, which contains 60,000 32×32 3-channel (RGB) images divided into 10 categories; the training set contains 50,000 images and the test set contains 10,000 images. When training the target model, the loss function is set to cross entropy (CrossEntropyLoss) and the Adam optimization method is used, with the learning rate set to 0.0001 and training running for 20 epochs. After training, the target model achieves 89% accuracy on the training set and 63% on the test set.

[0070] When training the membership inference model used to construct adversarial samples, the first 5,000 images of the target model's test set are used as non-training-set data, and the first 5,000 images of the target model's training set are selected as training-set data T; the training set of the member ...
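
A sketch of this split, assuming the CIFAR-10 datasets are loaded as in the Example 1 sketch; the member/non-member label convention is an assumption.

```python
from torch.utils.data import Subset, ConcatDataset
from torchvision import datasets, transforms

train_set = datasets.CIFAR10("./data", train=True, download=True, transform=transforms.ToTensor())
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=transforms.ToTensor())

members_T = Subset(train_set, range(5000))   # first 5,000 target training images (training-set data T)
non_members = Subset(test_set, range(5000))  # first 5,000 target test images (non-training-set data)
attack_data = ConcatDataset([members_T, non_members])
membership_labels = [1] * 5000 + [0] * 5000  # 1 = member, 0 = non-member (assumed convention)
```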



Abstract

The invention discloses an AI model privacy protection method based on adversarial samples for resisting membership inference attacks. The method comprises the following steps: (1) training a target model in the ordinary way; (2) obtaining a trained membership inference model through interactive training with the target model; (3) when the target model receives an input, feeding the prediction label vector output by the target model, together with the one-hot label vector obtained by one-hot encoding that prediction, into the trained membership inference model, and using the output of the membership inference model and the fast gradient sign method (FGSM) to perturb the prediction label vector output by the target model, thereby constructing an adversarial sample against the membership inference model; (4) having the target model output the adversarial sample with a probability of 50%, and otherwise keep the original output unchanged. The method overcomes problems such as gradient instability, long training time, and slow convergence caused by traditional defense approaches.
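
A minimal sketch of steps (3) and (4), assuming a trained membership inference model with the two-input interface sketched under Example 1 and an ε value this excerpt does not specify. The signed-gradient step here pushes the membership score away from "member", which is one plausible reading of the FGSM perturbation; whether the perturbed vector is re-normalized into a probability distribution is not stated.

```python
import random
import torch
import torch.nn.functional as F

def defended_output(pred, attack_model, epsilon=0.1):
    """pred: the target model's prediction label vector for one input, shape (1, C)."""
    # Step (3): one-hot encode the predicted class.
    one_hot = F.one_hot(pred.argmax(dim=1), num_classes=pred.size(1)).float()
    adv = pred.clone().detach().requires_grad_(True)
    member_score = attack_model(adv, one_hot)  # membership inference output in (0, 1)
    # Loss of the attack model against the "member" label; ascending this loss
    # drives its score toward "non-member".
    loss = F.binary_cross_entropy(member_score, torch.ones_like(member_score))
    loss.backward()
    # Fast gradient sign method: one signed-gradient step on the prediction vector.
    adv_sample = (adv + epsilon * adv.grad.sign()).detach()
    # Step (4): output the adversarial sample with probability 50%,
    # otherwise keep the original output unchanged.
    return adv_sample if random.random() < 0.5 else pred.detach()
```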

Description

Technical field

[0001] The invention belongs to the fields of computer information security and artificial intelligence security, and in particular relates to an adversarial-sample-based AI model privacy protection method for resisting membership inference attacks.

Background technique

[0002] Today, machine learning models are widely used in fields such as image processing, natural language processing, audio recognition, driverless cars, smart medical care, and data analysis. Taking data analysis as an example, many companies use machine learning models to analyze their large-scale user data, or publish trained machine learning models on the Internet to provide services to others. Users can query a model with their own data, that is, input data into the model and observe its output. At the same time, some companies (such as Google and Microsoft) also provide machine learning service platforms (Machine learning as a ser...

Claims


Application Information

IPC(8): G06N20/00; G06N5/04
CPC: G06N5/04; G06N20/00
Inventors: 吴至禹, 薛明富, 刘雨薇, 刘雯霞
Owner: NANJING UNIV OF AERONAUTICS & ASTRONAUTICS