Cross-modal search method capable of directly measuring similarity of different modal data

A cross-modal similarity retrieval technology, applied in the fields of electrical digital data processing, special data processing applications, instruments, etc., which can solve problems such as unsatisfactory query results, inability to learn, and high dimensionality.

Active Publication Date: 2014-01-01
ZHEJIANG UNIV


Problems solved by technology

However, existing retrieval methods are generally aimed at unimodal data, such as text retrieval for text and image retrieval for images.
There are also some multimodal or multimedia retrieval methods, but most of them measure similarity within the same modality and then compute the similarity between cross-media data through a function mapping; methods that directly compare the similarity of different modalities are rare. This is the shortcoming of existing cross-media retrieval methods.
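The distinction above can be illustrated with a minimal sketch. This is not the patent's actual model; it only shows the general idea of directly comparing different modalities by projecting both into a shared latent space with (assumed already learned) projection matrices `W_img` and `W_txt`, then applying cosine similarity:

```python
import numpy as np

# Hypothetical illustration (not the patent's exact method): project image and
# text feature vectors into a shared latent space, then compare directly.
rng = np.random.default_rng(0)
d_img, d_txt, d_latent = 128, 500, 10

W_img = rng.standard_normal((d_latent, d_img))  # assumed learned projection
W_txt = rng.standard_normal((d_latent, d_txt))  # assumed learned projection

def cross_modal_similarity(img_feat, txt_feat):
    """Cosine similarity between an image and a text in the shared space."""
    u = W_img @ img_feat
    v = W_txt @ txt_feat
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

img = rng.standard_normal(d_img)
txt = rng.standard_normal(d_txt)
score = cross_modal_similarity(img, txt)  # a value in [-1, 1]
```

Because both modalities land in the same space, an image query can be scored against text results (and vice versa) without a modality-specific mapping step.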

Method used



Examples


Embodiment

[0092] Suppose we have 2173 pairs of text and image data with known correspondence, and 693 pairs of text and image data with unknown correspondence. Examples of the pictures and texts are shown in Figure 2. First, SIFT features are extracted from all image-modality data in the database, and the k-means method is used to cluster them into visual words; the features are then normalized so that the feature vector representing each image is a unit vector. At the same time, part-of-speech tagging is performed on all text-modality data in the database; non-noun words are removed and the nouns in each text are retained. All words that appear in the database form a thesaurus. For each text, the number of occurrences of each thesaurus word is counted, the text is vectorized using its per-text term frequencies, and the feature vector is normalized so that the feature vector representing each text is a unit vector.
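The quantization and normalization stages of the embodiment above can be sketched as follows. This is a hedged illustration, not the patent's implementation: real SIFT extraction and part-of-speech tagging are assumed to have already produced the local descriptors and noun lists, the cluster centers stand in for a trained k-means codebook, and `vocab` is a hypothetical thesaurus:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
k = 8                                        # number of visual words (illustrative)
centroids = rng.standard_normal((k, 128))    # assumed k-means cluster centers

def image_feature(descriptors):
    """Bag-of-visual-words histogram over SIFT-like descriptors, L2-normalized."""
    # assign each 128-dim local descriptor to its nearest visual word
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / np.linalg.norm(hist)       # unit vector, as in the embodiment

vocab = ["image", "text", "retrieval", "modality"]  # hypothetical noun thesaurus

def text_feature(nouns):
    """Per-text term-frequency vector over the thesaurus, L2-normalized."""
    counts = Counter(nouns)
    vec = np.array([counts[w] for w in vocab], dtype=float)
    return vec / np.linalg.norm(vec)         # unit vector, as in the embodiment

img_vec = image_feature(rng.standard_normal((50, 128)))  # 50 fake descriptors
txt_vec = text_feature(["image", "retrieval", "image", "modality"])
```

Normalizing both modalities to unit vectors makes later similarity comparisons scale-invariant, which matches the embodiment's insistence that every image and text be represented by a unit vector.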

[0093] Express the paired 2173 pairs of data (features...



Abstract

The invention discloses a cross-modal search method capable of directly measuring the similarity of data from different modalities. The method includes the steps of: first, feature extraction; second, model building and learning; third, cross-media data retrieval; fourth, result evaluation. Compared with traditional cross-media search methods, this method can directly compare the similarity of data from different modalities: for a cross-modal search task, a user can submit text, images, sounds, or data of any other modality in order to retrieve results in the required corresponding modality. This satisfies the requirements of cross-media search and reflects the user's search intention more directly. Compared with other cross-media search algorithms that can directly measure similarity across modalities, the method has stronger resistance to noise interference and a stronger capacity to express loosely related cross-modal data, so better search results can be achieved.

Description

Technical field

[0001] The invention relates to cross-modal retrieval, and in particular to a cross-modal retrieval method that can directly measure the similarity between data of different modalities.

Background technique

[0002] Nowadays, electronic data comes in ever more varied types: text, images, sound, maps, and other kinds of data exist widely on the Internet. The same semantic content can often be described by data of one modality or, equally, by data of other modalities. Sometimes, given a description of certain semantics in one type of data, we hope to find the corresponding description in other types of data; for example, searching for pictures whose meaning is similar to a given text, or searching for news reports related to a given picture. However, existing retrieval methods are generally aimed at unimodal data, such as text retrieval for text and image retrieval for images. There are also some multimodal or multimedia r...

Claims


Application Information

IPC(8): G06F17/30
CPC: G06F16/9032
Inventors: ZHUANG Yueting (庄越挺), WU Fei (吴飞), WANG Yanfei (王彦斐), TANG Siliang (汤斯亮), SHAO Jian (邵健)
Owner ZHEJIANG UNIV