
Discriminative association maximization hash-based cross-mode retrieval method

A cross-modal, discriminative retrieval technology, applied in character and pattern recognition, special data processing applications, instruments, etc. It addresses the problem that existing methods do not take into account the discriminative distribution of data features, and achieves reduced retrieval time, easier classification, and improved performance.

Active Publication Date: 2017-11-28
SHANDONG NORMAL UNIV
Cites: 6 | Cited by: 28


Problems solved by technology

[0005] Although there are a variety of hash-based cross-media retrieval methods, the existing methods do not consider the discriminative distribution of data features.



Examples

Embodiment 1

[0068] This embodiment provides a cross-modal retrieval method based on discriminative association maximization hashing, as shown in Figure 3, including the following steps:

[0069] Step 1: Obtain a training data set, where each sample includes paired image and text modal data;

[0070] Step 2: Multimodal extraction is performed on the training data set to obtain the training multimodal data set O_train;

[0071] Step 3: For the training multimodal data set O_train, construct the objective function based on discriminative association maximization hashing on the data set;

[0072] Step 4: Solve the objective function to obtain the projection matrices W_1 and W_2 that project images and texts into the common Hamming space, the joint hash code B of each image-text pair, and the classifier matrix Q, using the joint hash code B as the hash code for the paired image and text;

[0073] Step 5: Obtain the test data set and perform multimodal extraction on it to obtain the...
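Once the projections from Step 4 are available, retrieval itself reduces to sign quantization followed by Hamming-distance ranking. The sketch below illustrates that pipeline; the matrices W1 and W2 are random stand-ins for the learned projections, and all dimensions and data are hypothetical, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: image features d1, text features d2, hash length k.
d1, d2, k, n = 512, 128, 32, 100

# Stand-ins for the learned projection matrices W1 (images) and W2 (texts);
# in the method above these come from solving the objective function in Step 4.
W1 = rng.standard_normal((d1, k))
W2 = rng.standard_normal((d2, k))

X_img = rng.standard_normal((n, d1))   # database image features
x_txt = rng.standard_normal((1, d2))   # a single text query

# Quantize the real-valued projections to binary hash codes with sign().
B_img = np.sign(X_img @ W1)            # n x k codes in {-1, +1}
b_txt = np.sign(x_txt @ W2)            # 1 x k query code

# Cross-modal retrieval: rank database images by Hamming distance to the query.
hamming = np.count_nonzero(B_img != b_txt, axis=1)
top5 = np.argsort(hamming)[:5]         # indices of the five nearest images
```

Because both modalities share one Hamming space, a text query can be matched directly against image codes (and vice versa) with cheap bitwise comparisons.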

Embodiment 2

[0129] Based on the above cross-modal retrieval method using discriminative association maximization hashing, this embodiment provides a corresponding objective function construction method, as shown in Figure 2, including:

[0130] Step 1: Obtain a training data set, wherein each sample includes paired image and text modal data; perform multimodal extraction on the training data set to obtain the training multimodal data set O_train;

[0131] Step 2: Project the data of the two modalities from their original heterogeneous spaces into the common Hamming space, and maximize the association between the paired image and text within each sample;

[0132] Step 3: Perform linear discriminant analysis processing on the text modality data and transfer its characteristics to the image modality data;

[0133] Step 4: Convert the two modal data features into a hash code, and minimize the quantization loss of the hash code obtained through the hash function;

[0134] Step 5: ...
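The objective terms named in Steps 2 and 4 can be illustrated numerically. The sketch below computes a trace-based association score between paired projections and the quantization loss of a joint sign-based hash code B; the LDA transfer term (Step 3), trade-off weights, and regularizers of the actual objective are omitted, and all matrices are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d1, d2, k = 50, 64, 32, 16          # hypothetical sizes

X1 = rng.standard_normal((n, d1))      # image features
X2 = rng.standard_normal((n, d2))      # text features
W1 = rng.standard_normal((d1, k))      # stand-in projection for images
W2 = rng.standard_normal((d2, k))      # stand-in projection for texts

P1, P2 = X1 @ W1, X2 @ W2              # projections into the common space

# Step 2 term: association between paired projections -- maximizing the
# trace of P1^T P2 pulls each image's projection toward its paired text's.
association = np.trace(P1.T @ P2)

# Step 4 term: a joint hash code B shared by each image-text pair, and the
# quantization loss it incurs relative to the real-valued projections.
B = np.sign(P1 + P2)                   # n x k joint codes in {-1, +1}
quant_loss = np.linalg.norm(B - P1) ** 2 + np.linalg.norm(B - P2) ** 2
```

In the full method these terms are combined into one objective and solved jointly for W1, W2, and B (alternating-style optimization is typical for such formulations), rather than evaluated once on random matrices as here.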



Abstract

The invention provides a discriminative association maximization hash-based cross-mode retrieval method. The method comprises the steps of performing multi-mode extraction on a training data set to obtain a training multi-mode data set; for the training multi-mode data set, building a discriminative association maximization hash-based target function on the data set; solving the target function to obtain the projection matrices that project images and texts into a common Hamming space, together with the joint hash codes of image-text pairs; for a test data set, projecting it into the common Hamming space and performing quantization through a hash function to obtain the hash codes of the test samples; and performing cross-mode retrieval based on the hash codes. According to the method, the cross-media retrieval efficiency and accuracy are improved.

Description

Technical field

[0001] The invention relates to the field of data retrieval, in particular to a cross-modal retrieval method based on discriminative association maximization hashing.

Background technique

[0002] With the development of science and technology, a large amount of multimodal data has poured into the Internet. In order to retrieve useful information from the Internet, a series of information retrieval technologies have emerged. Traditional information retrieval technology is based on a single mode, that is, the input query data and the retrieved results are of the same mode. This makes information retrieval very limited, so we hope to extend single-modal information retrieval to cross-modal information retrieval: given a picture, retrieve the text descriptions related to the picture, and vice versa.

[0003] Because the data of different modalities have different characteristics, it is almost impossible to directly measure the similarity between the two, wh...

Claims


Application Information

IPC(8): G06F17/30; G06K9/62
CPC: G06F16/2228; G06F18/214
Inventors: 张化祥, 卢旭, 万文博, 刘丽, 郭培莲, 任玉伟, 孙建德, 王强
Owner: SHANDONG NORMAL UNIV