Matrix decomposition cross-modal hash retrieval method based on collaborative training

A matrix decomposition and co-training technique in the field of image processing. It addresses the problems that existing hash codes have low discriminability, that retrieval accuracy is affected, and that inter-modal similarity and intra-modal similarity cannot be maintained effectively at the same time, with the effect of improving image-text mutual retrieval performance and accuracy.

Active Publication Date: 2017-05-31
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

However, when accurate class label information is not easy to obtain in practice, existing methods cannot effectively maintain the similarity between modalities and the similarity within modalities at the same time, which affects retrieval accuracy.



Examples


Embodiment 1

[0037] In the era of big data, the acquisition and processing of information are very important, and retrieval technology is a key step; especially with the emergence of large amounts of data in various modalities, effective retrieval is the key to making use of that information. Existing cross-modal hash retrieval methods cannot effectively maintain inter-modal and intra-modal similarity at the same time when class label information is not easy to obtain in practice, and retrieval accuracy suffers as a result. In response to this problem, the present invention proposes a matrix decomposition cross-modal hash retrieval method based on collaborative training; see Figure 1. The entire hash retrieval process includes the following steps:

[0038] (1) Obtain the original data; the original data set includes a training data set and a test data set. Normalize the training data of the original data set, and obtain t...
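A minimal sketch of this normalization step, assuming zero-mean, unit-variance scaling of the image and text feature matrices; the exact normalization scheme is not spelled out in the excerpt, and the array names below are illustrative:

```python
import numpy as np

def normalize_features(X):
    """Standardize a feature matrix column-wise (rows are samples).

    Zero-mean, unit-variance scaling is assumed here; the patent only
    states that the training data are normalized.
    """
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0            # guard against constant feature columns
    return (X - mean) / std

# Illustrative usage with placeholder image / text feature matrices
X1_train = normalize_features(np.random.rand(1000, 512))   # image features
X2_train = normalize_features(np.random.rand(1000, 300))   # text features
```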

Embodiment 2

[0060] The matrix decomposition cross-modal hash retrieval method based on collaborative training is the same as in Embodiment 1. The construction of the neighbor graph of the training data described in step (3), which obtains the neighbor relationships of the training data, proceeds as follows:

[0061] (3a) Each row of the normalized image training data matrix is regarded as a vector representing one image sample, and the Euclidean distance d between every pair of vectors is computed;

[0062] (3b) Sort the Euclidean distances d and, for each image sample, take out the Euclidean distances of its k nearest neighbors and save them into a symmetric adjacency matrix W1. The value of k lies in the range [10, 50]; a larger k improves accuracy but increases the amount of computation, and the choice of k is also related to the amount of data in the retrieval system. In this example the number of neighbors k is 10;

[0063] (3c) Calculate the Laplacian matrix of the image data adjacency matrix W1 (a sketch of steps (3a)-(3c) follows) ...
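A minimal sketch of steps (3a)-(3c), assuming the Euclidean distances of the k nearest neighbors are stored in a symmetric adjacency matrix W1 and that the unnormalized graph Laplacian L = D - W1 is used; the excerpt truncates before the exact Laplacian definition, so that choice and the function name are assumptions:

```python
import numpy as np

def knn_graph_laplacian(X, k=10):
    """Build the k-NN adjacency matrix W1 and its graph Laplacian.

    Each row of X is one normalized image feature vector, as in step (3a).
    W1[i, j] holds the Euclidean distance between samples i and j when j is
    among the k nearest neighbors of i (symmetrized), and 0 otherwise.
    """
    n = X.shape[0]
    # Step (3a): pairwise Euclidean distances between all row vectors
    sq = np.sum(X ** 2, axis=1)
    d = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))

    # Step (3b): keep only each sample's k nearest neighbors (itself excluded)
    W1 = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]
        W1[i, nbrs] = d[i, nbrs]
    W1 = np.maximum(W1, W1.T)          # symmetric adjacency matrix

    # Step (3c): unnormalized graph Laplacian L = D - W1 (variant assumed)
    D = np.diag(W1.sum(axis=1))
    return W1, D - W1

# Illustrative usage with a placeholder feature matrix
W1, L1 = knn_graph_laplacian(np.random.rand(200, 512), k=10)
```

The text modality would presumably be handled in the same way to obtain its own adjacency matrix and Laplacian.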

Embodiment 3

[0068] The matrix decomposition cross-modal hash retrieval method based on collaborative training is the same as in Embodiments 1-2. The process of obtaining the objective function in step (4) includes:

[0069] (4a) Perform matrix decomposition on the image training data X^(1) and the text training data X^(2) respectively, and construct the matrix decomposition reconstruction error term (a plausible form is sketched after step (4b) below), where ||·||_F denotes the Frobenius norm of a matrix, U_1 and U_2 are the basis matrices of the image data and the text data respectively, V is the shared coefficient matrix of the paired image and text data under the basis matrices, and α is the balance parameter between the two modalities; α = 0.5 is taken so that the two modalities contribute equally to the objective function.

[0070] (4b) Since the hash codes of the training data X^(t) are obtained by quantizing the low-dimensional representation coefficients V, a linear projection reconstruction error term is constructed to obtain the linear projection matrix W of the training data ...
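A plausible form of the error terms referenced in steps (4a) and (4b), assuming the usual collective matrix factorization formulation with columns of X^(1) and X^(2) as samples and with P_1, P_2 standing for the per-modality linear projection matrices (the patent's exact formulas are not reproduced in this excerpt), is

    α·||X^(1) − U_1 V||_F^2 + (1 − α)·||X^(2) − U_2 V||_F^2        (step (4a), reconstruction)
    ||V − P_1 X^(1)||_F^2 + ||V − P_2 X^(2)||_F^2                  (step (4b), projection)

The sketch below evaluates these assumed terms and a threshold quantization of V into binary hash codes; the shapes, the thresholding rule and the function names are assumptions, not the published formulation.

```python
import numpy as np

def objective_terms(X1, X2, U1, U2, V, P1, P2, alpha=0.5):
    """Evaluate the assumed error terms of steps (4a) and (4b).

    Shape conventions assumed: columns are samples, i.e. X1 is d1 x n and
    X2 is d2 x n, U1 is d1 x r, U2 is d2 x r, V is r x n (shared
    low-dimensional coefficients), and P1 / P2 are r x d1 / r x d2.
    """
    fro2 = lambda M: np.linalg.norm(M, 'fro') ** 2
    # (4a) matrix decomposition reconstruction error, balanced by alpha
    recon = alpha * fro2(X1 - U1 @ V) + (1.0 - alpha) * fro2(X2 - U2 @ V)
    # (4b) linear projection reconstruction error (exact form assumed)
    proj = fro2(V - P1 @ X1) + fro2(V - P2 @ X2)
    return recon, proj

def quantize(V):
    """Quantize the coefficients V into binary hash codes.

    Thresholding each bit (row of V) at its median is one common choice;
    the patent's exact quantization rule is not given in this excerpt.
    """
    return (V >= np.median(V, axis=1, keepdims=True)).astype(np.uint8)
```

In the full method, the inter-modal constraints obtained by collaborative training and the intra-modal Laplacian constraints of Embodiment 2 would be added to these terms to form the complete objective function, which the abstract states is then minimized by alternating iteration.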



Abstract

The invention discloses a cross-modal hash retrieval method based on collaborative training and matrix decomposition, by which the inter-modal similarity and the intra-modal similarity of unlabeled cross-modal data can be effectively constrained. The implementation steps are: acquire the original data and normalize it; perform collaborative training to obtain the inter-modal constraints; obtain the intra-modal constraints from neighbor relations; decompose the training data matrices and add the inter-modal and intra-modal constraints to obtain the objective function; iterate alternately to obtain expressions for the basis matrices, coefficient matrices and projection matrices; quantize to obtain the hash codes of the training and test data sets; compute the Hamming distance between every pair of hash codes; and sort the Hamming distances to obtain the retrieval results. Because the collaborative training process provides constraints on the inter-modal similarity of the cross-modal data, the method improves image-text mutual retrieval performance and accuracy, and can be used for image-text mutual search services on mobile devices, in the Internet of Things and in e-commerce.
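A minimal sketch of the final retrieval stage described above, i.e. computing Hamming distances between hash codes and sorting them, assuming the codes are stored as 0/1 arrays; the function name is illustrative:

```python
import numpy as np

def hamming_retrieve(query_codes, database_codes):
    """Rank database items by Hamming distance to each query hash code.

    query_codes:    m x r array of 0/1 bits (e.g. codes of the test set)
    database_codes: n x r array of 0/1 bits (e.g. codes of the training set)
    Returns an m x n array whose i-th row lists the database indices sorted
    from smallest to largest Hamming distance to query i.
    """
    # Counting differing bit positions gives the Hamming distance
    dist = (query_codes[:, None, :] != database_codes[None, :, :]).sum(axis=2)
    return np.argsort(dist, axis=1)

# Illustrative usage with random placeholder codes
image_codes = np.random.randint(0, 2, size=(5, 32))
text_codes = np.random.randint(0, 2, size=(100, 32))
ranking = hamming_retrieve(image_codes, text_codes)
```

For image-to-text retrieval the query codes come from the image modality and the database codes from the text modality, and vice versa for text-to-image retrieval.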

Description

Technical field

[0001] The invention belongs to the technical field of image processing and relates to fast mutual retrieval of large-scale image data and text data, specifically a matrix decomposition cross-modal hash retrieval method based on collaborative training, which can be used for image-text mutual search services in the Internet of Things, e-commerce and mobile devices.

Background technique

[0002] In recent years, with the rapid development of technologies such as mobile devices, the Internet and cloud computing, the information society has entered the era of big data. A large amount of data in different modalities, such as images, text, audio and video, is emerging rapidly and, as a medium of information transmission, has penetrated all aspects of people's lives. Big data is changing how people work and live, and it is also affecting the way scientific research is conducted. Today, with the rapid development of information technology, the application ...


Application Information

IPC(8): G06F17/30
CPC: G06F16/325; G06F16/33; G06F16/50
Inventors: 王秀美, 张婕妤, 高新波, 王笛, 李洁, 邓成, 王颖, 田春娜, 韩冰
Owner: XIDIAN UNIV