
A deep learning model protection method without artificial noise

A deep learning technology that avoids artificial noise, applied in the field of neural network privacy and security. It addresses problems such as the imbalance between model performance and user privacy protection, and achieves the effects of high security and practicality, low accuracy loss, and wide application prospects.

Active Publication Date: 2021-04-16
NANJING UNIV

AI Technical Summary

Problems solved by technology

[0006] Purpose of the invention: Aiming at the problems and deficiencies of the imbalance between neural network performance and user privacy protection in the prior art, the present invention provides a deep learning model protection method that requires no additional artificial noise. The method offers lower user privacy leakage potential and better neural network performance.




Embodiment Construction

[0048] The present invention will be further explained below in conjunction with the accompanying drawings and specific embodiments.

[0049] The embodiment of the present invention discloses a privacy-preserving deep neural network model publishing method that requires no additional noise. The method achieves user privacy protection through a simple statistical method combined with a differential privacy mechanism. By means of probability distributions and score statistics, users who request model parameters cannot recover private data from the returned results, thereby protecting user privacy.

[0050] After receiving a user query request, the scheme divides the query process into two parts: a statistical process and a generation process. The statistical process uses Kernel Density Estimation (KDE), a simple classical non-parametric method for estimating parameters when the underlying distribution is unknown. In this scheme, the distribution function of the parameters of ...
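The statistical process described above can be sketched with a plain 1-D Gaussian KDE. This is a minimal illustration only: the function name, the bandwidth rule (Silverman's rule of thumb), and the sample numbers are our assumptions, not details from the patent.

```python
import numpy as np

def gaussian_kde_1d(samples, bandwidth=None):
    """Simple 1-D Gaussian kernel density estimator.

    Uses Silverman's rule of thumb for the bandwidth when none is given.
    Returns a callable that evaluates the estimated density at points x.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)

    def density(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # Sum a Gaussian kernel centred on each observed sample
        z = (x[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

    return density

# Hypothetical values of one weight collected from M = 5 training runs
weight_samples = [0.41, 0.39, 0.44, 0.40, 0.42]
pdf = gaussian_kde_1d(weight_samples)
likelihoods = pdf(np.linspace(0.3, 0.5, 200))
```

The returned `pdf` plays the role of the estimated distribution function of one model parameter; the generation process would then draw the released value from it rather than expose any raw trained value.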



Abstract

The invention discloses a method for protecting a deep learning model without artificial noise, comprising: step 1, after receiving a query request from a user, obtaining the required trained neural network model parameters Π and Θ, where the set of user neural network models is denoted Π = {π_1, π_2, π_3, ..., π_M}, with π_i the model obtained by the i-th training of the same neural network, and Θ the set of all parameters on the different layers of the neural network after iterative training; step 2, processing the input parameter data Θ with the kernel density estimation method to obtain the probability distribution function of each parameter; step 3, scoring each parameter value θ_{u,v} with the scoring function p(u,v), the total score P indicating how likely that value is; step 4, returning the most likely parameter value, which is the result of the user's query request. The invention prevents a user who requests model parameters from obtaining private data from the returned result, thereby protecting user privacy.
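The four steps of the abstract can be sketched end to end for a single parameter. This is a minimal reading of the method, assuming a Gaussian kernel, Silverman's bandwidth rule, and a grid search for the most probable value; the function name, grid size, and sample numbers are illustrative, not from the patent.

```python
import numpy as np

def answer_parameter_query(theta_samples, num_candidates=512):
    """Steps 2-4 for one parameter: fit a KDE over the M observed
    values and return the most probable value, so no raw (private)
    trained value is ever released."""
    theta = np.asarray(theta_samples, dtype=float)
    n = theta.size
    h = 1.06 * theta.std(ddof=1) * n ** (-1 / 5)  # Silverman bandwidth
    # Step 2: candidate values and their (unnormalised) KDE density
    grid = np.linspace(theta.min() - 3 * h, theta.max() + 3 * h, num_candidates)
    z = (grid[:, None] - theta[None, :]) / h
    scores = np.exp(-0.5 * z ** 2).sum(axis=1)  # step 3: total score P
    # Step 4: return the highest-scoring candidate
    return float(grid[int(np.argmax(scores))])

# Step 1 analogue: the same weight observed across M = 5 training runs
released = answer_parameter_query([0.41, 0.39, 0.44, 0.40, 0.42])
```

The released value sits at the mode of the estimated distribution, which is what lets the scheme answer queries without adding artificial noise.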

Description

Technical field

[0001] The invention relates to a method for protecting a deep learning model without artificial noise, and belongs to the technical field of neural network privacy and security.

Background technology

[0002] In recent years, with the continuous research and development of deep neural networks, neural networks have been widely applied in many fields. Many machine learning services provide customers with neural network applications by publishing neural network models, and protecting users' private data when publishing such models has become one of the research hotspots in the machine learning field.

[0003] Privacy protection schemes are mainly divided into three types: adversarial training, secure computing, and differential privacy training.

[0004] Adversarial training trains the neural network with both the task model and an adversarial model, so that the network model can effectively prevent data leakage; however, it cannot prevent unknown dat...
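For contrast, the differential privacy training mentioned in [0003] conventionally perturbs released values with calibrated noise. A minimal sketch of the standard Laplace mechanism follows; it is illustrative background only, not part of the patented method, which avoids exactly this kind of artificial noise.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with Laplace noise of scale sensitivity/epsilon,
    the standard epsilon-differentially-private additive mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Releasing one model parameter with epsilon = 1 and (assumed) unit sensitivity
noisy = laplace_mechanism(0.41, sensitivity=1.0, epsilon=1.0)
```

The accuracy cost of this added noise is the imbalance the invention aims to remove.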

Claims


Application Information

Patent Timeline
No application data available
Patent Type & Authority: Patent (China)
IPC(8): G06F21/62; G06N3/08
CPC: G06F21/6227; G06F21/6245; G06N3/08
Inventor: 毛云龙, 林宇, 朱博宇, 张渊, 仲盛
Owner: NANJING UNIV