56 results about "Inference attack" patented technology

An inference attack is a data-mining technique in which data an adversary may legitimately access is analyzed to illegitimately gain knowledge about a subject or a database. A subject's sensitive information is considered leaked if an adversary can infer its real value with high confidence, which constitutes a breach of information security. In an inference attack, a user infers more robust information about a database from trivial information, without accessing the database directly; the objective is to piece together information available at one security level to determine a fact that should be protected at a higher security level.
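As a concrete illustration of the definition above, here is a minimal sketch of a classic differencing attack, in which two legitimate aggregate queries are combined to reveal one individual's value. The table, names, and salaries are made up for illustration:

```python
# Differencing attack sketch: the attacker never reads Carol's row,
# yet recovers her exact salary from two allowed aggregate queries.

records = [
    {"name": "Alice", "salary": 52000},
    {"name": "Bob",   "salary": 48000},
    {"name": "Carol", "salary": 61000},
]

def sum_salaries(rows):
    """Aggregate query: total salary of the given rows."""
    return sum(r["salary"] for r in rows)

# Query 1: sum over everyone; query 2: sum over everyone except Carol.
total_all = sum_salaries(records)
total_without_carol = sum_salaries(
    [r for r in records if r["name"] != "Carol"])

inferred_carol_salary = total_all - total_without_carol
print(inferred_carol_salary)  # 61000
```

Real systems counter this with query auditing, minimum query-set sizes, or noise addition, which several of the patents below build on.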

Method for protecting privacy of identity information based on sensitive information measurement

The invention discloses a method for protecting the privacy of identity information based on sensitive-information measurement. The method comprises the following steps: S1, determining input and output; S2, defining and calculating an identity-importance degree; S3, optimizing the identity importance; S4, calculating a sensitive-information disclosure matrix, a minimum attack set, and an information-disclosure probability; S5, determining a generalizing function and generalizing the dataset; S6, establishing a privacy-protection model that withstands background-knowledge attacks; S7, describing a (gamma, eta)-Risk anonymity algorithm that takes an original dataset D as input and outputs an anonymized dataset D'; and S8, introducing a confidence interval that keeps an attacker's high-probability inference within the specified interval, preventing the attacker from using an attribute-distribution function to compute the user's identity information and mount a high-probability inference attack. The method addresses the difficulty existing privacy-protection methods have in handling privacy attacks based on background knowledge, and protects key identity and identity-sensitive information more comprehensively and effectively.
Owner:Hunan Chenhan Technology Co., Ltd.
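Step S5 above, determining a generalizing function and generalizing the dataset, might look like the following sketch. The bucket width, attribute names, and records are illustrative assumptions, not the patent's actual scheme:

```python
# Generalization sketch: quasi-identifier values are coarsened so that
# several identities fall into the same equivalence class, blunting
# inference from background knowledge.

def generalize_age(age, width=10):
    """Map an exact age to a range string, e.g. 37 -> '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

dataset = [
    {"age": 34, "zip": "10023", "disease": "flu"},
    {"age": 37, "zip": "10027", "disease": "cold"},
    {"age": 36, "zip": "10021", "disease": "flu"},
]

# Generalize age to a decade and truncate the ZIP code.
anonymized = [
    {"age": generalize_age(r["age"]),
     "zip": r["zip"][:3] + "**",
     "disease": r["disease"]}
    for r in dataset
]
for row in anonymized:
    print(row)
```

All three records now share the quasi-identifier pair ("30-39", "100**"), so none can be singled out by it.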

Differential privacy recommendation method based on heterogeneous information network embedding

CN111177781A (pending)
The invention realizes a differential-privacy recommendation method based on heterogeneous-information-network embedding, comprising four steps: performing network-representation learning with HAN, and calculating a heterogeneous attention sensitivity from the HAN characterizations and attention-weight results; based on the definition of differential privacy, using the heterogeneous attention sensitivity to generate corresponding random noise and producing a random-noise matrix through a heterogeneous-attention random-disturbance mechanism; constructing and learning an objective function for differential-privacy recommendation embedded with heterogeneous information to obtain a prediction-score matrix; and outputting the score matrix as a privacy-preserving prediction score. The original scoring data is thus protected in a recommendation-system scenario over a heterogeneous information network: an attacker is prevented from improving its inference-attack capability using heterogeneous-information-network data acquired through other channels, and from guessing or re-learning the original scoring data with high probability by observing changes in the recommendation results.
Owner:BEIHANG UNIV
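The noise-generation step above can be sketched with the standard Laplace mechanism. Here "sensitivity" merely stands in for the patent's heterogeneous attention sensitivity, which the abstract does not define precisely; the score matrix and epsilon are illustrative assumptions:

```python
import math
import random

def laplace_noise(scale):
    """One sample from a zero-mean Laplace distribution (inverse CDF)."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def perturb_scores(scores, sensitivity, epsilon):
    """Add Laplace(sensitivity / epsilon) noise to every score."""
    scale = sensitivity / epsilon
    return [[s + laplace_noise(scale) for s in row] for row in scores]

random.seed(0)
scores = [[4.0, 3.5], [2.0, 5.0]]  # toy prediction-score matrix
noisy = perturb_scores(scores, sensitivity=1.0, epsilon=2.0)
print(noisy)  # same shape as scores, each entry perturbed
```

Smaller epsilon (or larger sensitivity) means larger noise and stronger privacy at the cost of recommendation accuracy.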

Method for protecting sensitive semantic location privacy for continuous query in road network environment

The invention, applicable to the technical field of privacy protection, provides a method for protecting sensitive semantic location privacy for continuous queries in a road-network environment. The method comprises the following steps: S1, receiving a location query request sent by a user; S2, judging whether the request is the user's first location query; if yes, constructing an anonymous user set based on spatio-temporal similarity, and if no, executing step S3; S3, constructing a semantically secure anonymous region for the anonymous user set and sending it to the LBS location server; S4, receiving candidate results returned by the LBS location server; and S5, filtering the candidate results based on the user's precise location and returning the filtered query result to the user. The method effectively prevents an attacker from mounting a semantic inference attack against the user's location privacy under a continuous-query location service, and constructing the semantically secure anonymous region strengthens the protection that a K-anonymity algorithm provides for that privacy.
Owner:ANHUI NORMAL UNIV
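Step S5 above, filtering the server's candidate results against the user's precise location, can be sketched as follows; the coordinates and points of interest are illustrative assumptions:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def filter_candidates(user_pos, candidates, k=1):
    """Keep the k candidates nearest the user's precise position.
    The LBS server only ever saw the cloaked region, not user_pos."""
    return sorted(candidates, key=lambda c: dist(user_pos, c["pos"]))[:k]

user = (2.0, 3.0)  # precise location, held by the anonymizer only
candidates = [
    {"name": "gas station A", "pos": (10.0, 10.0)},
    {"name": "gas station B", "pos": (2.5, 3.5)},
    {"name": "gas station C", "pos": (0.0, 9.0)},
]
print(filter_candidates(user, candidates)[0]["name"])  # gas station B
```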

Decentralized privacy preserving reputation evaluation method for crowd sensing

The invention relates to a decentralized privacy-preserving reputation evaluation method oriented to crowd sensing, and belongs to the technical field of network data processing. Based on a blockchain crowd-sensing scenario, the method uses blind-signature and ring-signature technologies to construct a reputation evaluation mechanism that preserves anonymity while preventing anyone from arbitrarily modifying a reputation value. The real identity of a user participating in a crowd-sensing task is hidden behind the anonymity of the blockchain account, and the data demander issuing the sensing task scores the user according to the quality of the provided data and publishes the score on the blockchain as a transaction. A user who participates in sensing tasks multiple times is allowed to create several accounts, and secret transfer of the reputation value among different accounts is realized with technologies such as ring signatures, so that inference attacks from the data demander are resisted. The method hides the association between user accounts and real identities, and resists user whitewashing attacks on identity authentication.
Owner:BEIJING INSTITUTE OF TECHNOLOGY

Location anonymous method for resisting replay attack in road network environment

The invention discloses a location anonymization method for resisting replay attacks in a road-network environment. The method comprises the following steps: (1) road sections are pre-processed and pre-sorted once by breadth-first sorting, reducing the location-query cost; (2) an anonymity set is constructed: an equivalent partition is obtained from the sorted road sections, yielding an anonymous, equivalent road-section concealment set, and the road sections in the concealment set are equalized by a pseudo-user adding mechanism so that the edge-weight association degree satisfies a preset threshold. The privacy threat of an edge-weight inference attack is thereby prevented while the replay attack is resisted. The method effectively generates the concealment set while resisting the common replay attack and edge-weight inference attack in a road-network environment, with low query cost and quick service response.
Owner:HOHAI UNIV
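The pseudo-user adding mechanism in step (2) might be sketched as follows: query counts (edge weights) on the road sections of a concealment set are padded with dummy users until no section's weight falls below a threshold fraction of the maximum. The counts and threshold are illustrative assumptions:

```python
import math

def equalize(weights, threshold=0.9):
    """Pad each edge weight up to ceil(threshold * max) with
    pseudo-users, so no section stands out to an observer."""
    floor = math.ceil(threshold * max(weights))
    padded = [max(w, floor) for w in weights]
    dummies = [p - w for p, w in zip(padded, weights)]
    return padded, dummies

counts = [12, 3, 7, 11]          # queries observed per road section
padded, dummies = equalize(counts)
print(padded, dummies)  # [12, 11, 11, 11] [0, 8, 4, 0]
```

After padding, the smallest weight is within 90% of the largest, so an attacker observing edge weights can no longer single out the user's true road section.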

Rapid model forgetting method and system based on generative adversarial network

The invention discloses a rapid model-forgetting (machine unlearning) method and system based on a generative adversarial network. The method comprises the steps of: inputting third-party data with the same distribution as the to-be-forgotten data into the original model and sorting its outputs to obtain a first sorting result; initializing a generator to the original model, inputting the to-be-forgotten data into the generator, and sorting its outputs to obtain a second sorting result; and alternately training the generator and a discriminator with the two sorting results, stopping when the discriminator can no longer distinguish the distribution of the generator's outputs on the to-be-forgotten data from that of the original model's outputs on the third-party data. A membership inference attack is then carried out on the generator; if the attack concludes that the to-be-forgotten data were not used to train the generator, forgetting has succeeded, and the trained generator serves as the forgotten model. The method increases the speed of forgetting data in a model, with an especially pronounced effect in complex scenarios.
Owner:GUANGZHOU UNIVERSITY
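The verification step above can be sketched with the simplest form of membership inference, a confidence-threshold attack: if the forgotten model is still unusually confident on the to-be-forgotten samples, forgetting failed. The confidence values and threshold below are illustrative assumptions, not the patent's specific attack:

```python
def membership_attack(confidences, threshold=0.9):
    """Flag a sample as a training member if the model's top
    confidence on it exceeds the threshold."""
    return [c > threshold for c in confidences]

# Assumed confidences of the forgotten model on the to-be-forgotten
# data versus on unseen third-party data with the same distribution.
forgotten_conf = [0.62, 0.55, 0.71]
third_party_conf = [0.60, 0.58, 0.66]

flags_forgotten = membership_attack(forgotten_conf)
flags_third = membership_attack(third_party_conf)
# Forgetting succeeds when the attack cannot separate the two sets.
print(flags_forgotten == flags_third and not any(flags_forgotten))  # True
```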

High-robustness privacy protection recommendation method based on adversarial learning

The invention provides a high-robustness privacy-protection recommendation method based on adversarial learning. The method comprises the following steps: constructing the training set required for optimizing a neural collaborative filtering model and the reference set required for training a membership inference model; designing a neural collaborative filtering joint model with a membership-inference regularization term, and iteratively optimizing the joint model in an adversarial-training fashion with the training set and reference set to obtain robust user and item feature-representation matrices; predicting unobserved scores from the obtained user and item feature matrices; and recommending to each user the item set with relatively high predicted scores among items with which the user has not yet interacted. By designing a unified min-max objective function trained adversarially, the recommendation algorithm is explicitly endowed with the ability to defend against membership inference attacks; overfitting of the recommendation model is also relieved, realizing a two-way improvement in personalized-recommendation performance and training-data privacy protection.
Owner:BEIJING JIAOTONG UNIV
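The min-max structure described above can be illustrated with toy scalars: the recommender (defender) minimizes its prediction loss minus a weighted membership-inference attack loss, while the attacker separately minimizes its own loss. The loss values and the weight lam are illustrative assumptions, not the patent's actual objective:

```python
def defender_objective(rec_loss, attack_loss, lam=0.5):
    """Joint loss: fit the data (rec_loss) while keeping the
    membership-inference attacker's loss (attack_loss) large."""
    return rec_loss - lam * attack_loss

# When the attacker improves (its loss drops from 0.7 to 0.3), the
# defender's objective worsens, pushing the recommender to make its
# representations less informative to the attacker on the next update.
obj_weak_attacker = defender_objective(rec_loss=0.40, attack_loss=0.70)
obj_strong_attacker = defender_objective(rec_loss=0.40, attack_loss=0.30)
print(obj_weak_attacker, obj_strong_attacker)
```

In the real method both quantities would be differentiable model losses updated by alternating gradient steps, as in standard adversarial training.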

Deep-model privacy protection method and device against membership inference attacks, based on parameter sharing

The invention discloses a deep-model privacy protection method and device, based on parameter sharing, against membership inference attacks. The method comprises the following steps: constructing a target model and optimizing its network parameters with image samples; after optimization, clustering each layer's network parameters and replacing the parameters belonging to the same cluster with that cluster's mean value; constructing a shadow model with the same structure as the target model and optimizing its network parameters with training image samples; constructing new image samples from the shadow model; constructing an attack model and optimizing its parameters with the new image samples; and obtaining the prediction confidence of an input test image from the parameter-shared, enhanced target model, feeding that confidence into the optimized attack model, computing the attack model's prediction, and judging from the prediction whether the test image is a member sample of the target model.
Owner:ZHEJIANG UNIV OF TECH
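The parameter-sharing step above can be sketched with a tiny 1-D k-means: a layer's weights are clustered and each weight is replaced by the mean of its cluster, coarsening the model's memorization of individual training samples. The weight values and k are illustrative assumptions:

```python
def kmeans_1d(values, k=2, iters=10):
    """Tiny 1-D k-means: returns k cluster centres."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def share_parameters(weights, k=2):
    """Replace each weight with the centre of its cluster."""
    centers = kmeans_1d(weights, k)
    return [min(centers, key=lambda c: abs(w - c)) for w in weights]

layer = [0.11, 0.09, 0.10, 0.92, 0.88, 0.90]  # toy layer weights
shared = share_parameters(layer)
print(sorted(set(shared)))  # only two distinct values remain
```

Six distinct weights collapse to two shared values near 0.10 and 0.90, so the layer carries less per-sample information for an attacker to exploit.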

Membership-inference-attack-oriented deep model privacy protection method based on outlier detection

The invention discloses a deep-model privacy protection method, oriented to membership inference attacks, based on outlier detection. The method comprises the following steps: relieving the target model's overfitting through regularization; finding the outlier samples on which the model is most vulnerable to membership inference and deleting them from the target model's training set, improving the model's privacy; and finally re-training the target model to achieve the defensive effect. To determine the samples susceptible to membership inference attacks, the method first establishes a reference-model training set, builds a reference model and trains it on that set, inputs the samples under test into the reference model to obtain their feature vectors, and computes the distances between the feature vectors of different samples; it then calculates each sample's local outlier factor, and a sample whose local outlier factor is greater than 1 is an outlier. The method eliminates the unstable gradients, non-convergent training, and slow convergence of traditional defense methods and achieves relatively good defensive performance.
Owner:ZHEJIANG UNIV OF TECH
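A simplified score in the spirit of the local outlier factor used above can be sketched as the ratio of a sample's average distance to its k nearest neighbours over the same quantity averaged across those neighbours; values well above 1 mark isolated, attack-prone samples. The feature vectors and the cutoff of 1.5 are illustrative assumptions (the patent uses a reference model's features and a cutoff of 1):

```python
import math

def knn_avg_dist(points, idx, k=2):
    """Average distance from points[idx] to its k nearest neighbours."""
    ds = sorted(math.dist(points[idx], p)
                for j, p in enumerate(points) if j != idx)
    return sum(ds[:k]) / k

def outlier_factor(points, idx, k=2):
    """Own k-NN average distance divided by the neighbours' average;
    a simplified local-outlier-factor-style score."""
    neighbours = sorted(
        (j for j in range(len(points)) if j != idx),
        key=lambda j: math.dist(points[idx], points[j]))[:k]
    own = knn_avg_dist(points, idx, k)
    around = sum(knn_avg_dist(points, j, k) for j in neighbours) / k
    return own / around

features = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
scores = [outlier_factor(features, i) for i in range(len(features))]
outliers = [i for i, s in enumerate(scores) if s > 1.5]
print(outliers)  # [3] -- the isolated sample is flagged
```

The flagged samples would then be removed from the training set before re-training the target model.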

Data set authentication method and system based on machine learning member inference attack

The invention discloses a dataset authentication method and system based on machine-learning membership inference attacks, belonging to the field of Internet-of-Things data protection. The method comprises the steps of: after obtaining a target dataset and an auxiliary dataset, selecting a plurality of machine-learning models and constructing reference-model groups from each of the two datasets; predicting the target dataset with the two reference-model groups to obtain a member prediction set and a non-member prediction set; taking the two prediction sets as features and the corresponding member attributes as labels, and training an authentication model; performing a membership inference attack on all data in the member prediction set with the authentication model, and screening member fingerprint data from the target dataset; and, based on the authentication model, obtaining the probability that the member fingerprint data are member data of a suspicious model, thereby determining whether the suspicious model was trained on the Internet-of-Things dataset. The interests and privacy of the data owner are thus effectively protected.
Owner:HUAZHONG UNIV OF SCI & TECH
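The fingerprint-screening step above might look like the following sketch: from the member prediction set, keep the samples the authentication model tags as members with the highest probability, to be replayed later against a suspicious model. The sample names and probabilities are illustrative assumptions:

```python
def screen_fingerprints(samples, member_probs, top_n=2):
    """Return the top_n samples ranked by membership probability;
    these serve as the dataset's member fingerprints."""
    ranked = sorted(zip(samples, member_probs), key=lambda t: -t[1])
    return [s for s, _ in ranked[:top_n]]

samples = ["img_a", "img_b", "img_c", "img_d"]
probs = [0.97, 0.52, 0.88, 0.99]  # authentication model's outputs
print(screen_fingerprints(samples, probs))  # ['img_d', 'img_a']
```

If a suspicious model later assigns these fingerprints high membership probability, that is evidence it was trained on the protected dataset.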