Hash cross-modal information retrieval method based on dictionary pair learning

A cross-modal information retrieval technology, applied in digital data information retrieval, character and pattern recognition, electrical digital data processing, etc. It addresses problems such as reduced retrieval accuracy and the difficulty of obtaining consistent hash codes, and achieves high average retrieval precision, reduced demand on computing resources, and fast retrieval speed.

Pending Publication Date: 2020-11-24
XIDIAN UNIV


Problems solved by technology

This method directly maps data of different modalities to Hamming space and introduces class-label constraints and discrete constraints when learning the hash codes. Although it achieves good retrieval efficiency, it still has a shortcoming: by mapping data of different modalities directly to Hamming space, it ignores the heterogeneity of those modalities. Specifically, data of different modalities have different representations and different dimensionalities, so a direct mapping to Hamming space makes it difficult to obtain consistent hash codes, which reduces retrieval accuracy.
However, this method has two shortcomings: 1. It directly maps data of different modalities to Hamming space, ignoring their heterogeneity, so it is difficult to obtain consistent hash codes. 2. Although introducing graph constraints preserves the similarity of the original data in the hash codes and thereby improves retrieval accuracy, it also greatly increases the computational complexity, which is unfavorable for applications on large-scale data.

Method used



Examples


Embodiment Construction

[0026] The present invention is further described below with reference to figure 1.

[0027] Step 1, generate a training set.

[0028] Randomly select data of two modalities with different physical forms, together with the category label matrix of the items corresponding to each modality, and homogenize the feature matrices of the two modalities to form a training set. The label matrices of the two modalities are consistent, and the same amount of data is selected from each modality.

[0029] Embodiments of the present invention take the text and image modalities as examples for description.
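As an illustration of step 1, the following is a minimal sketch, not the patent's own code, of how the two modalities' feature matrices and a shared label matrix might be sampled and zero-centered ("homogenized") into a training set; the function and variable names are hypothetical.

```python
import numpy as np

def build_training_set(image_features, text_features, labels, num_train, seed=0):
    """Randomly pick the same items from both modalities and center their features.

    image_features: (n, d_img) array, text_features: (n, d_txt) array,
    labels: (n, c) category label matrix shared by both modalities.
    """
    assert image_features.shape[0] == text_features.shape[0] == labels.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(image_features.shape[0], size=num_train, replace=False)

    X_img = image_features[idx]   # image-modality training features
    X_txt = text_features[idx]    # text-modality training features
    Y = labels[idx]               # shared label matrix (consistent for both modalities)

    # "Homogenize" each modality: zero-center every feature dimension.
    X_img = X_img - X_img.mean(axis=0, keepdims=True)
    X_txt = X_txt - X_txt.mean(axis=0, keepdims=True)
    return X_img, X_txt, Y
```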

[0030] Step 2, construct the hash objective function Q based on dictionary pair learning.

[0031]-[0032] (The two equations defining the objective function Q appear as images in the original document and are not reproduced here.)

[0033] Among them, min(·) denotes the minimization operation, ||·||_F denotes the Frobenius norm, Y denotes the label matrix, W denotes the linear classifier to be obtained by optimizing the objective function Q with the optimal direction method, and (·)^T denotes the transpose...
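Since the patent's own equations [0031]-[0032] are not reproduced here, the following is only an orienting sketch of the generic form a dictionary-pair-learning hashing objective of this kind typically takes, using analysis dictionaries P_m, synthesis dictionaries D_m, a shared coefficient embedding matrix B, the label matrix Y, and the linear classifier W; the exact terms and the trade-off weights λ, μ, ν of the patent's function Q may differ.

```latex
\min_{D_m,\,P_m,\,B,\,W}\;
\sum_{m \in \{\mathrm{img},\,\mathrm{txt}\}}
\Big( \lVert X_m - D_m B \rVert_F^2
      + \lambda \lVert B - P_m X_m \rVert_F^2 \Big)
+ \mu \lVert Y - W B \rVert_F^2
+ \nu \Big( \lVert W \rVert_F^2 + \sum_m \lVert P_m \rVert_F^2 \Big)
```

In this generic form, the first two terms let each modality's data be re-expressed through its own dictionary pair as a common embedding B with a smaller modal gap, and the classification term ties B to the class labels, which matches the idea described in the abstract.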


Abstract

The invention discloses a hash cross-modal information retrieval method based on dictionary pair learning. The method comprises the following steps: (1) generating a training set; (2) constructing a hash objective function based on dictionary pair learning; (3) optimizing the hash objective function; (4) calculating the hash codes of a test set; and (5) obtaining a retrieval result. According to the invention, the objective function is constructed using the dictionary pair idea: data of different modalities are transformed through a dictionary pair into a coefficient embedding matrix with a smaller modal difference. This avoids mapping data of different modalities directly to Hamming space, better handles the heterogeneity of different modalities, and yields a higher average retrieval precision for cross-modal retrieval.
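As a hedged, high-level illustration of how steps (4) and (5) of the abstract could fit together at query time, the sketch below hashes test-set queries and ranks database items by Hamming distance. The helper names, the projection P_query, and the optional rotation R are hypothetical stand-ins, not the patent's actual implementation, and the learned quantities from steps (2)-(3) are assumed to be given.

```python
import numpy as np

def hamming_rank(query_codes, database_codes):
    """Rank database items by Hamming distance to each query hash code.

    Codes are {-1, +1} matrices of shape (n, code_length).
    """
    # For ±1 codes, Hamming distance = (code_length - inner product) / 2.
    dist = 0.5 * (query_codes.shape[1] - query_codes @ database_codes.T)
    return np.argsort(dist, axis=1)          # per-query ranking, nearest first

def cross_modal_retrieval(query_feats, P_query, database_codes, R=None):
    """Steps (4)-(5): hash the test queries, then retrieve by Hamming distance.

    P_query: projection (e.g. an analysis dictionary) learned for the query modality.
    R: optional rotation aligning the embedding with the binary codes (hypothetical).
    """
    embedding = query_feats @ P_query.T       # project into the shared coefficient space
    if R is not None:
        embedding = embedding @ R
    query_codes = np.sign(embedding)          # step (4): binarize to hash codes
    query_codes[query_codes == 0] = 1
    return hamming_rank(query_codes, database_codes)   # step (5): ranked results
```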

Description

Technical Field

[0001] The invention relates to the field of computer technology, and more specifically to a hash cross-modal information retrieval method based on dictionary pairs in the field of information retrieval technology. The present invention can be used in existing information retrieval applications involving data of multiple modalities, such as text and images, to realize fast retrieval between data of different modalities.

Background Technique

[0002] Multimodal data is becoming more and more common in daily life, and the demand for cross-modal retrieval is increasing accordingly. For example, articles on social media often use images or videos to supplement text descriptions, and content that combines pictures and text has become increasingly popular. People generally hope to retrieve related images or videos through text. Such data usually come in very large volumes, so in order to achieve fast retrieval of cross-modal data, many studies now foc...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F16/435G06F16/9536G06K9/62
CPCG06F16/435G06F16/9536G06F18/28G06F18/214G06F18/24
Inventor 王磊闵康凌李丹萍史凌峰
Owner XIDIAN UNIV