A Cross-Modal Retrieval Method Based on Topic Model

A topic-model-based cross-modal retrieval technology, applied in the field of cross-modal retrieval, which addresses problems such as the lack of analysis of the internal mechanism of cross-modal data and the limited interpretability of the learned latent space.

Publication Date: 2017-06-23 (Inactive)
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

However, these mapping-based methods depend entirely on the statistical characteristics of the data; they provide little analysis of the internal mechanism of cross-modal data, and the latent space obtained by learning is not readily interpretable.



Examples


Embodiment

[0100] In order to verify the effect of the present invention, web pages from "Wikipedia featured articles" are used; each page contains an image and several passages of text describing the content of that image, and together they form a cross-modal document. These cross-modal documents serve as the data set for the experiments of the present invention (see attached figure 2). The data set contains two modalities, text and image; the text dictionary is set to 5000 terms, and the number of cluster centers for the image features is set to 1000. The whole data set is divided into 10 categories and contains 2866 cross-modal documents in total, of which 1/5 are randomly selected for testing and the remaining documents are used as training data. Following the steps described in the specific embodiment, the experimental results obtained are as follows:

[0101] Table 1. Results on the Wikipedia dataset ...
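As a minimal sketch of the experimental setup described in [0100] (a 5000-term bag-of-words dictionary for text, 1000 k-means cluster centers as image "visual words", and a random 1/5 test split), the following Python code shows one plausible preprocessing pipeline. It assumes scikit-learn and NumPy, assumes local image descriptors (e.g. 128-dimensional SIFT) have already been extracted, and uses small placeholder inputs; the helper names and toy data are hypothetical and are not taken from the patent.

```python
# Sketch of the setup in [0100]: bag-of-words text features over a 5000-term
# dictionary, image features quantized onto 1000 k-means cluster centers
# ("visual words"), and a random 1/5 test split.
# Assumptions: scikit-learn/NumPy, pre-extracted local image descriptors (e.g. SIFT),
# and toy placeholder data standing in for the 2866 Wikipedia featured-article documents.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

def build_text_features(texts, vocab_size=5000):
    """Bag-of-words counts over a fixed-size dictionary (5000 terms in the experiment)."""
    vectorizer = CountVectorizer(max_features=vocab_size)
    return vectorizer.fit_transform(texts), vectorizer

def build_image_features(descriptor_sets, n_visual_words=1000, seed=0):
    """Quantize each image's local descriptors into a histogram over visual words."""
    codebook = KMeans(n_clusters=n_visual_words, random_state=seed, n_init=10)
    codebook.fit(np.vstack(descriptor_sets))
    histograms = np.stack([
        np.bincount(codebook.predict(d), minlength=n_visual_words)
        for d in descriptor_sets
    ])
    return histograms, codebook

# Hypothetical stand-in data (real inputs: 2866 documents in 10 categories).
rng = np.random.default_rng(0)
texts = ["image of a bird perched on a branch"] * 20
descriptor_sets = [rng.normal(size=(50, 128)) for _ in texts]  # e.g. 128-d SIFT per image
labels = rng.integers(0, 10, size=len(texts))

X_text, _ = build_text_features(texts)
X_image, _ = build_image_features(descriptor_sets, n_visual_words=8)  # 1000 on real data
# Hold out a random 1/5 of the documents for testing; train on the rest.
train_idx, test_idx = train_test_split(np.arange(len(texts)), test_size=0.2, random_state=0)
```

On the real data set, the placeholder texts and descriptors would be replaced by the Wikipedia featured-article documents, with vocab_size=5000 and n_visual_words=1000 as stated in [0100].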



Abstract

The invention discloses a cross-modal retrieval method based on a topic model. The method comprises the following steps: (1) feature extraction and label recording are performed on every type of modal data in the database; (2) a cross-modal retrieval graphical model based on the topic model is built; (3) the graphical model is solved by means of Gibbs sampling; (4) a user submits data of one modality and, after feature extraction, the corresponding data of the other modality are returned by the cross-modal retrieval model; (5) the true correspondence information and the label information of the cross-modal data are used to evaluate the cross-modal retrieval model in terms of both correspondence and discrimination. By introducing the concepts of cross-modal topics and modality-specific topics and by making use of label information, the method improves the interpretability and flexibility of topic modeling and exhibits good extensibility and discrimination performance.
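Step (3) relies on collapsed Gibbs sampling for a topic model. As an illustration only, the sketch below implements a deliberately simplified shared-topic model in which each cross-modal document is a single bag of token ids mixing text words and image visual words drawn from one per-document topic distribution; this is plain LDA over concatenated vocabularies, not the patent's full graphical model, which additionally distinguishes cross-modal from modality-specific topics and uses label information. All names and hyperparameters here are assumptions.

```python
# Minimal collapsed Gibbs sampler for a *simplified* shared-topic model: each
# cross-modal document is one bag of token ids mixing text words and image
# visual words, all drawn from a single per-document topic distribution.
# Shown only to illustrate step (3); cross-modal vs. modality-specific topics
# and label supervision from the patent's model are omitted.
import numpy as np

def gibbs_lda(docs, vocab_size, n_topics=20, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # document-topic counts
    nkw = np.zeros((n_topics, vocab_size))  # topic-token counts
    nk = np.zeros(n_topics)                 # tokens assigned to each topic
    z = [rng.integers(0, n_topics, size=len(doc)) for doc in docs]  # initial assignments
    for d, doc in enumerate(docs):
        for w, k in zip(doc, z[d]):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Collapsed conditional p(z_i = k | everything else).
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)  # per-document topic mixtures
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)      # per-topic token distributions
    return theta, phi

# Toy usage: ids 0..4 stand for text words, ids 5..9 for image visual words.
docs = [np.array([0, 1, 5, 6]), np.array([2, 3, 7, 8]), np.array([0, 4, 5, 9])]
theta, phi = gibbs_lda(docs, vocab_size=10, n_topics=2, n_iter=50)
# Step (4) then amounts to ranking documents of the target modality by the
# similarity of their rows of theta to the topic mixture inferred for the query.
```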

Description

Technical Field

[0001] The invention relates to cross-modal retrieval, and in particular to a cross-modal retrieval method based on a topic model.

Background Technique

[0002] Nowadays, various types of data exist widely on the Internet, such as text, images, sound and geographic-location data. The same semantic content is often expressed through different types of data, so cross-media retrieval has become a practical need: for example, retrieving images related to the semantics contained in a piece of text, or retrieving text news reports related to an image.

[0003] Most existing retrieval methods target a single type of media data, such as text retrieval for text or image retrieval for images. Recently, several cross-modal retrieval methods have appeared, but most of them first calculate the similarity between data of the same modality and then use the known correspondence between different types of...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F17/30
CPC: G06F16/95
Inventors: 庄越挺, 吴飞, 李玺, 王彦斐, 宋骏
Owner: ZHEJIANG UNIV