A cross-modal retrieval method that can directly measure the similarity between different modal data

A cross-modal similarity technology, applied in the field of electrical digital data processing, special data processing applications, instruments, etc., addressing problems such as the scarcity of retrieval methods that directly compare similarity across modalities, high feature dimensionality, and the large differences between data of different modalities.

Active Publication Date: 2016-09-28
ZHEJIANG UNIV


Problems solved by technology

However, existing retrieval methods are generally aimed at unimodal data, such as text retrieval for text and image retrieval for images.
There are also some multimodal or multimedia retrieval methods, but most of them measure similarity within the same modality and then compute the similarity between cross-media data through function mapping; retrieval methods that directly compare similarity across different modalities are rare.
The shortcoming of cross-media retrieval methods that measure within-modality similarity is that they cannot learn the relationships between cross-modal data and must rely on matching relationships pre-specified in the database; for multimedia data with loose correspondences, the retrieval results are not ideal.
The difficulty of directly comparing data of different modalities is that their features differ greatly and are generally high-dimensional, and there is also the "semantic gap" problem.

Method used




Embodiment

[0092] Suppose there are 2173 pairs of text and image data with known correspondence, and 693 pairs of text and image data with unknown correspondence. Examples of the images and texts are shown in figure 2. First, SIFT features are extracted from all image-modality data in the database, the k-means method is used to cluster them into visual words, and the resulting features are normalized so that the feature vector representing each image is a unit vector. At the same time, part-of-speech tagging is performed on all text-modality data in the database; non-noun words are removed and only the nouns in each text are retained. All words that appear in the database form a vocabulary, the number of occurrences of each vocabulary word is counted separately for each text, each text is vectorized using its per-text word frequencies, and the feature vectors are then normalized so that the feature vector representing each text is a unit vector.
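The feature-extraction step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: SIFT extraction and part-of-speech tagging are assumed to have been done already (the `descriptors` and `nouns` inputs below are hypothetical stand-ins), and a tiny NumPy k-means is used to form the visual words.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: cluster pooled SIFT descriptors into k visual words."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then recompute centers.
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def image_feature(descriptors, centers):
    """Bag-of-visual-words histogram of one image, normalized to a unit vector."""
    words = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / np.linalg.norm(hist)

def text_feature(nouns, vocab):
    """Per-text noun-frequency vector over the vocabulary, unit-normalized."""
    index = {w: i for i, w in enumerate(vocab)}
    vec = np.zeros(len(vocab))
    for w in nouns:
        if w in index:
            vec[index[w]] += 1.0
    return vec / np.linalg.norm(vec)

# Hypothetical toy data standing in for real SIFT descriptors / tagged nouns.
rng = np.random.default_rng(0)
pooled = rng.normal(size=(200, 128))   # descriptors pooled over all images
centers = kmeans(pooled, k=10)
img_vec = image_feature(rng.normal(size=(30, 128)), centers)
txt_vec = text_feature(["cat", "tree", "cat"], ["cat", "dog", "tree"])
```

Both outputs are unit vectors, so similarity between items of the same modality reduces to a dot product, matching the normalization described above.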

[0093] Express the paired 2173 pairs of data (features...



Abstract

The invention discloses a cross-modal retrieval method that can directly measure the similarity between data of different modalities. It comprises the following steps: 1) feature extraction; 2) model building and learning; 3) cross-media data retrieval; 4) result evaluation. The invention can directly compare similarity between data of different modalities: for a cross-modal retrieval task, users can submit a query in any modality (text, image, sound, etc.) and retrieve results in the modality they need. The difference from traditional cross-media retrieval methods is that similarity comparison between different modalities is performed directly, which meets the requirements of cross-media retrieval and realizes the user's retrieval intention more directly. Compared with cross-media retrieval algorithms based on within-modality similarity, this method is more robust to noise and better able to represent loosely related cross-modal data, yielding better retrieval results.
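As a rough illustration of step 3 (cross-media data retrieval), the sketch below ranks candidates of one modality against a query of another by cosine similarity in a shared space. The linear maps `W_text` and `W_img` are hypothetical placeholders for whatever the model-building-and-learning step produces; the abstract does not specify the model form.

```python
import numpy as np

def cross_modal_rank(query_vec, W_q, candidates, W_c):
    """Project a query and cross-modal candidates into a shared space
    (via hypothetical learned linear maps W_q, W_c) and rank candidates
    by cosine similarity to the query."""
    q = W_q @ query_vec
    C = candidates @ W_c.T
    q = q / np.linalg.norm(q)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    scores = C @ q                      # cosine similarity per candidate
    return np.argsort(-scores), scores  # best match first

# Hypothetical dimensions: 300-d text features, 128-d image features, 16-d shared space.
rng = np.random.default_rng(1)
W_text = rng.normal(size=(16, 300))
W_img = rng.normal(size=(16, 128))
text_query = rng.normal(size=300)
image_db = rng.normal(size=(5, 128))
order, scores = cross_modal_rank(text_query, W_text, image_db, W_img)
```

`order[0]` is then the index of the image judged most similar to the text query, directly comparing the two modalities rather than routing through same-modality neighbors.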

Description

technical field [0001] The invention relates to cross-modal retrieval, in particular to a cross-modal retrieval method that can directly measure the similarity between data of different modalities. Background technique [0002] Nowadays, the types of electronic data are increasingly diverse, and various kinds of data, such as text, images, sound, and maps, exist widely on the Internet. The same semantic content can often be described by data of one modality or by data of other modalities. Sometimes, given a description of certain semantics in one type of data, we hope to find corresponding descriptions in other types of data: for example, searching for pictures whose meaning is similar to a given text, or searching for news reports related to a given picture. However, existing retrieval methods are generally aimed at unimodal data, such as text retrieval for text and image retrieval for images. There are also some multimodal or multimedia r...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F17/30
CPC: G06F16/9032
Inventor: 庄越挺, 吴飞, 王彦斐, 汤斯亮, 邵健
Owner ZHEJIANG UNIV