3D model retrieval method and 3D model retrieval apparatus based on slow increment features

The technology relates to incremental slow features and 3D model retrieval, and is applied in character and pattern recognition, special data processing applications, instruments, etc. It addresses problems such as sudden changes in visual feature form, and achieves the effects of reduced extraction difficulty, more efficient and accurate retrieval results, and improved matching efficiency.

Active Publication Date: 2016-02-10
TIANJIN UNIV
3 Cites 13 Cited by

AI-Extracted Technical Summary

Problems solved by technology

Ideally, with the gradual change of the viewing angle, the change of visual features should als...


Abstract

The present invention discloses a 3D model retrieval method and a 3D model retrieval apparatus based on slow increment features. The method comprises: carrying out slow increment feature extraction on a preprocessed view set by applying a supervised slow increment feature analysis method; acquiring a sorting result of the slow increment features according to the extracted slow increment features, screening the slow increment features according to the sorting result, and generating a slow increment feature library of the 3D models; and carrying out retrieval matching on the slow increment feature library of the 3D models by using a nearest neighbor algorithm to acquire and output objects that are similar to a candidate model. The apparatus comprises: an extraction module, an acquisition module, a generation module and a matching and outputting module. According to the method and the apparatus, the feature extraction difficulty of a non-rigid model is reduced, the stability and accuracy of feature extraction are improved, good conditions are provided for subsequent 3D model retrieval, and the retrieval results are guaranteed to be more efficient and accurate.


Examples

  • Experimental program(4)

Example Embodiment

[0048] Example 1
[0049] To make model retrieval more accurate, improve retrieval efficiency, and reduce the influence of external factors on the visual characteristics of the views, referring to Figure 1, the method includes the following steps:
[0050] 101: Apply the supervised incremental slow feature analysis method [6] to perform incremental slow feature extraction on the preprocessed view set;
[0051] 102: Obtain the ordering of the incremental slow features from the extracted incremental slow features, filter the incremental slow features according to the ordering result, and generate the incremental slow feature library of the 3D models;
[0052] 103: Use the nearest neighbor algorithm to retrieve and match against the incremental slow feature library of the 3D models, and obtain and output the objects similar to the candidate model.
[0053] Wherein, the method further includes: obtaining the 2D view set V of the objects in the database, and preprocessing the 2D view set to make the view sizes of all 3D models consistent.
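The following is a minimal sketch of the overall flow of steps 101 to 103. All helper names (retrieve_similar, extract_features) are hypothetical, and the distance used for matching is a simple pairwise Euclidean stand-in for the nearest neighbor matching detailed in Example 2:

```python
import numpy as np

def retrieve_similar(query_views, database_views, extract_features, top_k=5):
    """Hypothetical end-to-end sketch of steps 101-103: extract incremental slow
    features for every model (101), collect them into a feature library (102),
    then rank database models by nearest-neighbor matching (103)."""
    library = [extract_features(views) for views in database_views]  # feature library
    query = extract_features(query_views)

    def distance(a, b):
        # smallest pairwise Euclidean distance between view features (lower = more similar)
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return d.min()

    scores = [distance(query, feats) for feats in library]
    return np.argsort(scores)[:top_k]  # indices of the most similar 3D models
```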
[0054] Specifically, in step 101, performing incremental slow feature extraction on the preprocessed view set with the supervised incremental slow feature analysis method comprises:
[0055] Obtain the minor component of the kth view of the 3D model according to the difference signal, the first eigenvalue of the eigenvector of the covariance matrix, and the first minor component of the kth view of the 3D model;
[0056] Obtain the incremental slow feature estimates through the minor components of the 3D model;
[0057] Through the principal component of each view and the incremental slow feature estimate of each view, multiple incremental slow features are obtained.
[0058] Further, the differential signal is obtained by the principal component z(k) of the kth view of a 3D model and the principal component z(k-1) of the k-1th view.
[0059] Among them, the principal component is obtained by whitening and dimensionality reduction on the eigenvector of the covariance matrix; the acquisition of the eigenvector of the covariance matrix includes:
[0060] Perform non-linear expansion of input data to generate expanded data;
[0061] Calculate the zero mean of the extended data, and then obtain the eigenvectors of the covariance matrix of the input data through candid covariance-free incremental principal component analysis (CCIPCA).
[0062] In summary, through steps 101 to 103, the embodiment of the present invention reduces the difficulty of non-rigid model feature extraction, improves the stability and accuracy of feature extraction, and provides good conditions for subsequent 3D model retrieval, ensuring that the retrieval results are more efficient and accurate.

Example Embodiment

[0063] Example 2
[0064] The solution in Example 1 is described in detail below in conjunction with specific calculation formulas and examples:
[0065] 201: Obtain a 2D view set V of objects in the database;
[0066] This method mainly applies retrieval technology based on image comparison, that is, each 3D model is captured from multiple perspectives to form a 2D view set, and mature 2D techniques are used to extract the features of the objects. Each 3D model is therefore represented by multiple views, so the view set can be expressed as V = {v_1, v_2, …, v_N} with v_i = {f_1, f_2, …, f_M}, where v_i represents the view set of the i-th object; D represents the feature dimension of a view; f_k represents the k-th perspective view of an object; N represents the number of 3D models; M represents the number of views of each 3D model; and k ∈ [1, M] indicates the range of each object's view set.
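As an illustration only (the sizes below echo numbers used later in this example, but the array layout itself is an assumption, not part of the patent), such a view set can be held as an N × M × D array:

```python
import numpy as np

# Illustrative layout: N models, M views per model, each view flattened to a
# D-dimensional vector (e.g. D = s*s for an s x s image). Sizes are examples.
N, M, s = 80, 60, 25
D = s * s
V = np.zeros((N, M, D))   # V[i, k] is f_k, the k-th view of the i-th object v_i
```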
[0067] 202: Perform standardized preprocessing on the view set to make the view sizes of all 3D models consistent;
[0068] To facilitate subsequent feature extraction, the data are standardized in preprocessing so that the view set data have a consistent size. In the embodiment of the present invention, the 2D view size s×s is uniformly set to 25×25 for description; in specific implementations, however, the present invention does not impose any restriction on the view size or on the scaling method.
[0069] At the same time, when the view set is large and each individual view is too large, the data size should be chosen reasonably; this prevents the curse of dimensionality and increases the data processing rate, giving the best results.
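A minimal sketch of this standardization step, assuming views are stored as image files and that Pillow is available; the 25×25 target size comes from the text, while the grayscale conversion and [0, 1] scaling are illustrative choices:

```python
import numpy as np
from PIL import Image  # assumes Pillow is available

def preprocess_view(path, s=25):
    """Resize a 2D view to s x s (25 x 25 in this embodiment) and flatten it.
    Grayscale conversion and [0, 1] scaling are illustrative, not mandated."""
    img = Image.open(path).convert("L").resize((s, s))
    return np.asarray(img, dtype=np.float64).ravel() / 255.0
```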
[0070] 203: Use the supervised incremental slow feature analysis method to extract the incremental slow features of the view set, at the same time obtain the ordering of the incremental slow features by magnitude of change, and obtain the incremental slow feature library of the 3D models according to the ordering result;
[0071] Incremental slow feature analysis methods [7] mainly include two kinds: (1) unsupervised incremental slow feature analysis; (2) supervised incremental slow feature analysis. Unsupervised incremental slow feature analysis puts all sample sequences together, learns the incremental slow feature functions to obtain an incremental slow feature model, and then classifies all models; supervised incremental slow feature analysis learns the incremental slow feature functions on different sample sequences separately, so that different models are obtained directly. The embodiment of the present invention uses the supervised incremental slow feature analysis method to obtain supervised incremental slow features.
[0072] Among them, the steps of the supervised incremental slow feature analysis method are as follows:
[0073] 1) Input a 2D view of a 3D model, denoted as x(k) = [x_1(k), …, x_D(k)]^T;
[0074] Among them, x(k) is the 2D view data of the k-th view of a 3D model, that is, the k-th view of a 3D model; x_D(k) is the D-th dimension feature of the 2D view of one perspective; T denotes matrix transposition; the value range of k is [1, M], and M represents the number of views used to describe each 3D model.
[0075] 2) Perform non-linear expansion of input data x(k) to generate expanded data;
[0076] Apply the nonlinear expansion function h(x) = [x_1, …, x_D, x_1x_1, x_1x_2, …, x_Dx_D] to generate the extended data h(x(k)):
[0077] h(x(k)) = [x_1(k), …, x_D(k), x_1(k)x_1(k), x_1(k)x_2(k), …, x_D(k)x_D(k)]   (1)
[0078] Among them, h(x) is the nonlinear expansion function; x_D(k) is the D-th dimension feature of the k-th view of a 3D model; D is the feature dimension of each 2D view; h(x(k)) is the extended data of the k-th view.
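A small sketch of this quadratic expansion; keeping the products x_i·x_j only for i ≤ j is a common convention assumed here, since formula (1) does not spell out whether duplicate products are retained:

```python
import numpy as np

def quadratic_expand(x):
    """Nonlinear expansion h(x) of formula (1): the original components followed
    by the quadratic terms x_i * x_j (assumed i <= j to avoid duplicates)."""
    x = np.asarray(x, dtype=np.float64)
    quad = [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
    return np.concatenate([x, np.array(quad)])

# A D-dimensional view expands to D + D*(D+1)/2 components under this convention.
```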
[0079] 3) Find the zero mean u(k) of the extended data h(x(k)), and then obtain the eigenvectors v_d(k) of the covariance matrix of the input data through candid covariance-free incremental principal component analysis (CCIPCA);
[0080] u(k) = h(x(k)) − h̄(x(k))   (2)
[0081] Among them, h(x(k)) is the extended data of the k-th view; h̄(x(k)) is the mean of the extended data of the k-th view; u(k) is the zero-mean data of the k-th view. v_d(k) is the eigenvector of the covariance matrix of the input data, that is, the eigenvector of the d-th principal component covariance matrix of the k-th view, and its eigenvalue is λ_d(k). The eigenvector v_d(k) and eigenvalue λ_d(k) satisfy the following formula:
[0082] E[u(k)u(k)^T] v_d(k) = λ_d(k) v_d(k)   (3)
[0083] Among them, the eigenvectors v_d(k) are orthogonal, and the eigenvalues satisfy λ_1(k) ≥ λ_2(k) ≥ … ≥ λ_d(k). With the zero mean u(k) of the input data of each view obtained through formula (2), formula (3) can be rewritten as:
[0084] λ_d(k) v_d(k) = E[(u(k) · v_d(k)) u(k)]   (4)
[0085] Where v_d(k) is the eigenvector of the covariance matrix of the d-th principal component of the k-th view; the value range of d is [1, J], where J represents the number of eigenvectors of the principal component covariance matrix; u_1(k) is the zero mean of the first input data x_1(k) of the k-th view.
[0086] Initialize v_d(k) = u_1(k) = u(k); η represents the learning rate of the slow feature, and this experiment defines η = 0.005, which can be adjusted according to the experimental situation in a specific experiment. The final candid covariance-free principal components can then be calculated iteratively through formulas (5) and (6):
[0087] v_d(k) = (1 − η)·v_d(k−1) + η·[u_d(k)^T v_d(k−1) / ‖v_d(k−1)‖]·u_d(k)   (5)
[0088] u_d(k) = u_{d+1}(k) + [u_d(k)^T v_d(k) / ‖v_d(k)‖]·[v_d(k) / ‖v_d(k)‖]   (6)
[0089] Where v_d(k−1) is the eigenvector of the covariance matrix of the d-th principal component of the (k−1)-th view, that is, the eigenvectors of the k-th view are related to those of the previous, (k−1)-th, view; u_d(k) is the zero mean of the d-th dimension feature data of the k-th view; u_{d+1}(k) is the zero mean of the (d+1)-th dimension feature data of the k-th view, that is, the zero mean of the next feature dimension of each view, which is related to the zero mean of the d-th dimension feature data.
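A minimal sketch of one CCIPCA pass over the J directions for a single view, assuming the reconstructed forms of formulas (5) and (6) above; η = 0.005 is the learning rate from the embodiment, and the small constant added to the norms is only there to avoid division by zero:

```python
import numpy as np

def ccipca_update(u, v_prev, eta=0.005, eps=1e-12):
    """One CCIPCA pass for a single view, per formulas (5) and (6) as
    reconstructed above.
    u      : zero-mean expanded data u(k) of this view
    v_prev : list of J eigenvector estimates v_d(k-1) from the previous view"""
    v_new, u_d = [], u.copy()
    for v in v_prev:
        norm = np.linalg.norm(v) + eps
        # (5): pull v_d toward the current residual direction u_d
        v = (1.0 - eta) * v + eta * (u_d @ v / norm) * u_d
        v_new.append(v)
        norm = np.linalg.norm(v) + eps
        # (6): deflate the residual so the next direction is estimated
        # against what remains after removing this component
        u_d = u_d - (u_d @ v / norm) * (v / norm)
    return v_new
```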
[0090] 4) Perform whitening and dimensionality reduction on the eigenvectors v_d(k) of the covariance matrix to obtain the principal component: z(k) = V(k)F(k)u(k);
[0091] Among them, z(k) is the principal component of the k-th view of a 3D model; F(k) is a diagonal matrix created from the eigenvalues λ_d(k) of the covariance-matrix eigenvectors v_d(k); V(k) is obtained with formula (5), that is, the collection of the J covariance-matrix eigenvectors v_d(k) of a 2D view, with J ≤ D, i.e., the number J of principal component eigenvectors is smaller than the feature dimension D of the input view.
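A sketch of this whitening step under a standard reading: V(k) holds the (normalized) eigenvectors and F(k) is taken as diag(1/√λ_d); the exact normalization and the use of the CCIPCA vector norm as the eigenvalue estimate are assumptions, not spelled out in the text:

```python
import numpy as np

def whiten_projection(u, v_list, eps=1e-12):
    """Step 4): project the zero-mean data u(k) onto the J leading eigenvectors
    and whiten them, i.e. one standard reading of z(k) = V(k) F(k) u(k)."""
    lams = np.array([np.linalg.norm(v) for v in v_list])            # in CCIPCA, |v_d| estimates lambda_d
    V = np.stack([v / (np.linalg.norm(v) + eps) for v in v_list])   # J x dim, unit rows
    F = np.diag(1.0 / np.sqrt(lams + eps))                          # assumed whitening matrix
    return F @ (V @ u)                                              # J-dimensional principal component z(k)
```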
[0092] 5) Obtain the difference signal ż(k) through the principal component z(k) of the k-th view of a 3D model and the principal component z(k−1) of the (k−1)-th view. The formula is as follows:
[0093] ż(k) = z(k) − z(k−1)   (7)
[0094] Among them, ż(k) is the difference signal of the principal component of the k-th view of a 3D model; z(k−1) is the principal component of the (k−1)-th view of a 3D model.
[0095] 6) According to the difference signal ż(k), the first eigenvalue λ_1(k) of the covariance-matrix eigenvectors, and the first minor component C_1(k) of the k-th view of the 3D model, obtain the minor components C_d(k) of the k-th view of the 3D model, and obtain the incremental slow feature estimates w_d(k) through minor component analysis (MCA) of the 3D model;
[0096] After initialization, for each d = 1, …, J, use formulas (8) and (9) to update the minor components:
[0097] C_d(k) = C_1(k) + λ_1(k) Σ_{d=1}^{J} w_d(k) w_d^T(k) C_1(k)   (8)
[0098] w_d(k) = 1.5·w_d(k−1) − η·C_d(k)·w_d(k−1) − η·(w_d^T(k−1)·w_d(k−1))·w_d(k−1)   (9)
[0099] Among them, ż^T(k) is the transpose of the principal-component difference signal ż(k) of the k-th view of a 3D model; C_1(k) is the first minor component of the k-th view of a 3D model; C_d(k) is the d-th minor component of the k-th view of a 3D model; λ_1(k) is the first eigenvalue of the principal component covariance matrix; w_d(k) is the d-th incremental slow feature estimate of the k-th view of a 3D model, and w_d^T(k) is its transpose; w_d(k−1) is the d-th incremental slow feature estimate of the (k−1)-th view of a 3D model, and w_d^T(k−1) is its transpose; that is, the incremental slow feature estimate of each view of each 3D model is related to the incremental slow feature estimate of the previous view.
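A minimal sketch of this MCA step under stated assumptions: C_1(k) is taken here as the instantaneous covariance of the difference signal, and the deflation term of formula (8) is applied in the sequential form C_1 + λ_1·Σ_{j<d} w_j w_j^T commonly used in incremental SFA implementations, since the original expression is hard to recover exactly; formula (9) is followed as written:

```python
import numpy as np

def mca_update(z_dot, w_prev, lam1, eta=0.005):
    """One minor-component-analysis update of the slow feature estimates w_d(k).
    z_dot  : difference signal z(k) - z(k-1)
    w_prev : list of J previous estimates w_d(k-1)
    lam1   : first eigenvalue lambda_1(k)"""
    C1 = np.outer(z_dot, z_dot)   # assumed stand-in for the first minor component C_1(k)
    w_new = []
    for w in w_prev:
        Cd = C1.copy()
        for wj in w_new:          # (8), sequential deflation with the components found so far
            Cd += lam1 * np.outer(wj, wj)
        # (9): minor-component update
        w = 1.5 * w - eta * (Cd @ w) - eta * (w @ w) * w
        w_new.append(w)
    return w_new
```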
[0100] 7) From the principal component z(k) of each view and the incremental slow feature estimates w_d(k) of each view, acquire multiple incremental slow features and the ordering of the incremental slow features by magnitude of change. The final incremental slow feature output is as follows:
[0101] y(k) = z(k)W(k)   (10)
[0102] Among them, W(k) is the collection of the J incremental slow feature estimates w_d(k) of a view, and y(k) is the final incremental slow feature output. Repeat step 1) to step 7) and input all 3D models to obtain the slow feature library of the 3D models. In this experiment the number of incremental slow features is set to J = 400, so each 3D model yields 400 incremental slow features; in specific experiments, the number of incremental slow features is chosen by the experimenter.
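A short sketch of formula (10); stacking the estimates w_d(k) as columns of W(k) is an assumption about the garbled definition above:

```python
import numpy as np

def slow_feature_output(z, w_list):
    """Formula (10): project the principal component z(k) with W(k), assumed to
    collect the J slow feature estimates w_d(k) as its columns."""
    W = np.stack(w_list, axis=1)   # shape (J, J): column d is w_d(k)
    return z @ W                   # y(k) = z(k) W(k)
```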
[0103] 204: Use the nearest neighbor algorithm to search and match the incremental slow feature library of the 3D model, and obtain and output objects similar to the candidate model.
[0104] Randomly select a 2D view from the incremental slow feature library of the 3D models as the candidate model Q, and then select a 2D view as the input model P. The retrieval task matches the candidate model Q with the input model P and finally finds, in the incremental slow feature library of the 3D models, the objects similar to the candidate model Q. Commonly used model matching methods include the nearest neighbor algorithm, the Hausdorff distance, and weighted bipartite graph matching.
[0105] Without loss of generality, the nearest neighbor algorithm (Nearest Neighbor, NN) is used for registration, that is, the ratio of the distance to the nearest neighbor feature point to the distance to the second nearest neighbor feature point is used to match feature points. The nearest neighbor feature point is the feature point with the shortest Euclidean distance to the sample feature point, and the second nearest neighbor feature point is the feature point whose Euclidean distance is slightly longer than the nearest neighbor distance. Matching feature points with the nearest-to-second-nearest ratio achieves good results and thus stable matching. The specific steps are as follows:
[0106] Apply the following formula (11) to process the incremental slow feature learning data and calculate the feature point distance between different 2D images:
[0107]
[0108] Where y_i and y_j represent two different 2D images of a 3D model, and S_1(y_i, y_j) represents the similarity between the 2D images y_i and y_j, computed from the feature mapping functions of y_i and y_j, respectively. According to S_1(y_i, y_j), formula (12) is used to calculate the similarity of different 3D models, that is, the smallest feature point distance.
[0109] S_2(P, Q) = min_{1≤i≤n, 1≤j≤m} S_1(y_i, y_j)   (12)
[0110] Among them, S_2(P, Q) represents the similarity between models P and Q; n represents the number of 2D views of 3D model P, and m represents the number of 2D views of 3D model Q; because the database has been preprocessed beforehand, n is equal to m. The retrieved model with the highest similarity can be calculated with the following formula:
[0111] Q* = argmax S_2(Q_i, P)   (13)
[0112] Among them, Q* represents the retrieved model with the highest similarity, Q_i represents a candidate model, P is the input model, and S_2(Q_i, P) represents the similarity between the candidate model and the input model [8]. Finally, the matching probabilities between the query target and all models in the multi-view model library are sorted in descending order to obtain the final retrieval result. In summary, through steps 201 to 204, the embodiment of the present invention reduces the difficulty of non-rigid model feature extraction, improves the stability and accuracy of feature extraction, and provides good conditions for subsequent 3D model retrieval, ensuring that the retrieval results are more efficient and accurate.
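A minimal retrieval sketch mirroring formulas (12) and (13); the form of S_1 in formula (11) is not recoverable from the text, so it is passed in as a user-supplied function, and the text's mixed use of a minimum in (12) and a maximum in (13) is reproduced as-is:

```python
def retrieve(p_features, library, similarity_s1):
    """p_features : slow feature matrix of the input model P (one row per view)
    library       : dict mapping candidate id Q_i -> its slow feature matrix
    similarity_s1 : user-supplied function implementing formula (11)"""
    def s2(q_features):
        # formula (12): extreme pairwise view score over all pairs (y_i, y_j)
        return min(similarity_s1(yi, yj) for yi in p_features for yj in q_features)

    scores = {qid: s2(feats) for qid, feats in library.items()}
    best = max(scores, key=scores.get)                       # formula (13): Q* = argmax S_2(Q_i, P)
    ranking = sorted(scores, key=scores.get, reverse=True)   # descending-order retrieval list
    return best, ranking
```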

Example Embodiment

[0113] Example 3
[0114] In this experiment, the embodiment of the present invention was evaluated on two existing, publicly shared, and commonly used databases: the ETH database of the Swiss Federal Institute of Technology Zurich (Eidgenössische Technische Hochschule Zürich, ETH) and the National Taiwan University (NTU) database. The ETH database is relatively small and its models are fairly standardized. It includes 80 3D models in 8 categories with 10 objects per category: apple, car, cow, cup, puppy, horse, pear, and tomato. The NTU database has a total of 549 objects in 47 categories, and the number of objects per category varies; it is a virtual model database whose images are captured through 3D-MAX. This laboratory uses 60 virtual cameras to acquire images from different perspectives, so each object yields 60 views from different perspectives; the number of virtual cameras, that is, of shooting angles, can be set according to experimental requirements, and the embodiment of the present invention imposes no specific restriction.
[0115] The embodiment of the present invention applies the incremental slow feature algorithm to feature extraction for 3D model retrieval for the first time, and obtains very good results through the corresponding model retrieval and model evaluation; see Figure 2 for details.
[0116] Comparison algorithm
[0117] In the experiment, this method is compared with the following two methods:
[0118] SFA (Slow Feature Analysis), also known as the slow feature algorithm;
[0119] The Zernike moment is one of the feature descriptors of an image and can represent the basic characteristics of the image; Zernike moments have been proved to be invariant to translation, scaling, and rotation of the view. Compared with other moments they are better suited to image feature comparison, and they have been used in various target recognition and model analysis tasks.
[0120] Evaluation method
[0121] Recall and precision are important concepts in the field of information retrieval; both are commonly used evaluation indicators and can clearly and accurately reflect retrieval results. 3D model retrieval performance evaluation methods include the average recall (Average Recall, AR) and average precision (Average Precision, AP) evaluations, whose values lie in the range [0, 1]. The AR and AP formulas are as follows:
[0122] AR = R_d / (R_d + R_m)   (14)
[0123] AP = R_d / (R_d + R_f)   (15)
[0124] Where R_m indicates relevant but not retrieved, R_d indicates retrieved and relevant, and R_f indicates retrieved but not relevant. Without loss of generality, the Precision-Recall curve [8] is used to measure the retrieval performance of this method. The Precision-Recall curve is one of the important indicators for evaluating 3D target retrieval performance: the average recall is the abscissa and the average precision is the ordinate, and the larger the area enclosed with the coordinate axes, the better the performance of the method.
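A small worked sketch of formulas (14) and (15); the counts in the usage comment are invented purely for illustration:

```python
def average_recall_precision(r_d, r_m, r_f):
    """Formulas (14) and (15): AR = R_d / (R_d + R_m), AP = R_d / (R_d + R_f),
    with R_d retrieved-and-relevant, R_m relevant-but-missed,
    R_f retrieved-but-irrelevant."""
    ar = r_d / (r_d + r_m) if (r_d + r_m) else 0.0
    ap = r_d / (r_d + r_f) if (r_d + r_f) else 0.0
    return ar, ap

# Example: 7 relevant retrieved, 3 relevant missed, 2 irrelevant retrieved
# -> AR = 7/10 = 0.70, AP = 7/9 ≈ 0.78
```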
[0125] Experimental results
[0126] It can be seen from Figure 2 that, under the same retrieval method (NN), the features of this method perform significantly better than the SFA and Zernike features. This is because incremental SFA is better suited to unstable environments than ordinary SFA features: by processing the data iteratively, the relationship between the different perspectives of each model is preserved, the amount of computation for model matching is reduced, and the matching rate is increased. Compared with the Zernike feature, this method eliminates sudden changes in the form of the visual features and greatly improves the retrieval performance. The experimental results verify the feasibility and superiority of this method.


Description & Claims & Application Information

We can also present the details of the Description, Claims and Application information to help users get a comprehensive understanding of the technical details of the patent, such as background art, summary of invention, brief description of drawings, description of embodiments, and other original content. On the other hand, users can also determine the specific scope of protection of the technology through the list of claims; as well as understand the changes in the life cycle of the technology with the presentation of the patent timeline. Login to view more.
the structure of the environmentally friendly knitted fabric provided by the present invention; figure 2 Flow chart of the yarn wrapping machine for environmentally friendly knitted fabrics and storage devices; image 3 Is the parameter map of the yarn covering machine
Login to view more
