
Anti-robustness image feature extraction method based on variational spherical projection

A technology combining image feature extraction and spherical projection, applied in the field of image processing, which can solve problems such as threats to users' safety, hidden dangers in deep feature extractors, and the inability of existing methods to guarantee that the deep feature extraction model is threshold-separable in the feature space.

Active Publication Date: 2018-09-11
SOUTH CHINA UNIV OF TECH
Cites: 5 · Cited by: 12

AI Technical Summary

Problems solved by technology

[0006] It is precisely this local instability, and the adversarial attacks that exploit it, that pose serious security risks to applications of deep feature extractors. For example, in autonomous driving, deliberately attacking the features a camera extracts from road signs causes misclassification and therefore unpredictable vehicle behavior; likewise, in face recognition systems that widely use deep neural networks for feature extraction, deliberately attacking facial features can lead the system to wrongly authorize criminals, threatening users' property, privacy, and even personal safety.
[0007] To date, there are generally three ways to improve the robustness of deep neural network classifiers. The first is to impose regularization constraints on the model parameters themselves, whether in a classifier structure or a variational parameter-encoder structure; under a large regularization coefficient, the weights of each layer become overly smooth, the expressive power of the model is greatly reduced, and both the separability of the feature space and the classification performance of the classifier drop significantly.
The second is to smooth the labels of the training set and learn by distillation, so that the decision boundary of the model becomes smoother, but at the cost of the model's classification performance.
The third is adversarial training, which uses the gradient of the model to generate an adversarial counterpart of each original sample and adds it to the training set, so that the robustness of the model increases without loss of classification performance. However, the existing methods cannot guarantee that the deep feature extraction model is threshold-separable in the feature space, and so they are not suitable for feature extraction on unseen samples.
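As an illustration of this third approach, the sketch below generates adversarial counterparts of a batch with the fast gradient sign method (FGSM) and trains on the union of clean and perturbed samples. The patent does not fix a particular attack, so the fgsm_examples helper, the epsilon value, and the batch construction are assumptions made for illustration, not the claimed method.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, epsilon=0.03):
    """Generate adversarial counterparts of a batch with the fast gradient sign method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon along the sign of the input gradient, then re-clip to [0, 1].
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels):
    """One optimization step on the union of clean and adversarial samples."""
    adv = fgsm_examples(model, images, labels)
    batch = torch.cat([images, adv], dim=0)
    targets = torch.cat([labels, labels], dim=0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```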



Examples


Detailed Description of the Embodiments

[0078] The present invention will be further described below in conjunction with specific examples.

[0079] As shown in Figure 1, the anti-robustness image feature extraction method based on variational spherical projection provided in this embodiment includes the following steps:

[0080] 7) Repeat steps 2) through 6) until convergence to obtain the deep feature extraction model; at application time, use the mean parameter of the parametric encoding process as the feature to obtain highly separable features.

[0081] 1) Model initialization, including the following steps:

[0082] Define the model structure f(·|W_f, b_f) of the deep feature extractor f, and the unbiased linear model g(·|W_g), where the deep feature extractor has L layers corresponding to L weight matrices W_f^1, …, W_f^L and bias terms b_f^1, …, b_f^L, where W_f^l represents the weight matrix of the l-th layer, W_f^L represents the weight matrix of the last layer, b_f^l represents the bias term of the l-th layer, and b_f^L represents the bias term of the last layer...
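For illustration, a minimal sketch of this initialization is given below, together with the application-time feature extraction mentioned in step 7). It assumes fully connected layers for the L-layer extractor f(·|W_f, b_f), a bias-free linear model g(·|W_g), and that the extractor's mean output is L2-normalized as a stand-in for the spherical projection; the layer widths, the activation, and that normalization detail are assumptions rather than values taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_models(input_dim, hidden_dims, feature_dim, num_classes):
    """Deep feature extractor f with L weight matrices W_f^1..W_f^L and bias terms
    b_f^1..b_f^L, plus an unbiased linear model g parameterized only by W_g."""
    layers, width = [], input_dim
    for h in hidden_dims:                                  # layers 1 .. L-1
        layers += [nn.Linear(width, h), nn.ReLU()]
        width = h
    layers.append(nn.Linear(width, feature_dim))           # layer L
    f = nn.Sequential(*layers)
    g = nn.Linear(feature_dim, num_classes, bias=False)    # no bias term: W_g only
    return f, g

@torch.no_grad()
def extract_features(f, images):
    """Application-time use (step 7): take the extractor output as the mean parameter
    of the parametric encoding and project it onto the unit sphere (assumed detail)."""
    mu = f(images.flatten(start_dim=1))
    return F.normalize(mu, p=2, dim=1)
```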



Abstract

The invention discloses an anti-robustness image feature extraction method based on variational spherical projection. The method comprises the following steps: 1) initializing the model; 2) preprocessing the data set; 3) performing forward propagation of the variational spherical projection; 4) calculating the loss function; 5) applying adversarial training regularization; 6) back-propagating the gradients and updating the weights; 7) repeating steps 2) to 6) until convergence to obtain a deep feature extraction model, where, at application time, highly separable features can be obtained by taking the mean parameter of the parametric encoding process as the feature. The method is trained on the CASIA-WebFace data set and tested on the LFW data set; the robustness of the model against adversarial attacks is guaranteed while the features retain high separability.
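The seven steps of the abstract can be arranged into a training skeleton such as the sketch below, assuming a PyTorch-style encoder that returns a mean and a log-variance per image; compute_loss and adversarial_term are hypothetical placeholders standing in for the patent's actual loss function (step 4) and adversarial training regularization (step 5), which are not reproduced here.

```python
import torch
import torch.nn.functional as F

def compute_loss(mu, log_var, linear_head, labels):
    """Placeholder for step 4): classification loss on the normalized means.
    (log_var would enter the patent's variational loss; unused in this placeholder.)"""
    logits = linear_head(F.normalize(mu, p=2, dim=1))
    return F.cross_entropy(logits, labels)

def adversarial_term(encoder, linear_head, images, labels, epsilon=0.03):
    """Placeholder for step 5): loss on FGSM-perturbed inputs, an assumed form
    of the adversarial training regularization."""
    images = images.clone().detach().requires_grad_(True)
    mu, log_var = encoder(images)
    compute_loss(mu, log_var, linear_head, labels).backward()
    adv = (images + epsilon * images.grad.sign()).detach()
    mu_adv, log_var_adv = encoder(adv)
    return compute_loss(mu_adv, log_var_adv, linear_head, labels)

def train(encoder, linear_head, loader, optimizer, max_epochs=30, tol=1e-4):
    """Steps 2)-7): iterate over preprocessed batches, run the forward pass,
    add the adversarial term, back-propagate, and stop at convergence."""
    previous = float("inf")
    for _ in range(max_epochs):
        running = 0.0
        for images, labels in loader:                        # 2) preprocessing handled by the loader
            mu, log_var = encoder(images)                    # 3) forward pass of the projection
            loss = compute_loss(mu, log_var, linear_head, labels)                 # 4) loss
            loss = loss + adversarial_term(encoder, linear_head, images, labels)  # 5) regularization
            optimizer.zero_grad()
            loss.backward()                                  # 6) backward gradients
            optimizer.step()                                 #    and weight update
            running += loss.item()
        if abs(previous - running) < tol:                    # 7) repeat until convergence
            break
        previous = running
    return encoder  # the mean output serves as the feature at application time
```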

Description

Technical field

[0001] The present invention relates to the technical field of image processing, and in particular to a method, based on variational spherical projection, for extracting image features that are robust against adversarial attacks.

Background technique

[0002] In recent years, the increase in GPU computing power and the emergence of large labeled data sets have made it possible to train deep neural networks. Since a deep convolutional network won ImageNet's official large-scale visual recognition challenge (ILSVRC), deep network structures have been constantly innovating, gradually matching or surpassing human performance on specific tasks. Since then, deep learning networks have been widely used in face recognition feature extraction and similar-image retrieval. From the perspective of representation learning, the success of deep feature extraction lies in obtaining a significant and stable feature representation thr...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/46, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06V40/168, G06V40/172, G06V10/48, G06N3/045, G06F18/2451
Inventors: 沃焱, 谢仁杰, 韩国强
Owner: SOUTH CHINA UNIV OF TECH