Incomplete cross-modal retrieval method based on subspace learning

A subspace-learning technique applied to the field of cross-modal retrieval. It addresses the problem that existing cross-modal retrieval methods cannot effectively handle incompletely observed multimodal data, thereby improving retrieval performance.

Active Publication Date: 2017-06-13
天津中科智能识别有限公司 (Tianjin Zhongke Intelligent Identification Co., Ltd.)

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to solve the above technical problems by proposing an incomplete cross-modal retrieval method based on subspace learning.




Embodiment Construction

[0016] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0017] As shown in Figure 1, an incomplete cross-modal retrieval method based on subspace learning includes the following steps:

[0018] Step S1, collecting multimodal data and extracting features of each different modality;

[0019] The multimodal data includes picture data and text data corresponding to the picture data, such as image tagging words;

[0020] The modality-specific features are, for image data, typically visual descriptors such as SIFT or GIST features; for text data, typically the word-frequency vector of each document.
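The text-modality feature mentioned above, a word-frequency vector over a fixed vocabulary, can be sketched in a few lines of Python; the vocabulary and the sample document here are hypothetical illustrations, not taken from the patent.

```python
from collections import Counter

def word_frequency_vector(document, vocabulary):
    """Represent a document as a word-frequency vector over a fixed vocabulary,
    the text-modality feature described in paragraph [0020]."""
    counts = Counter(document.lower().split())
    return [counts.get(word, 0) for word in vocabulary]

# Hypothetical vocabulary and document, for illustration only.
vocab = ["image", "retrieval", "subspace"]
vec = word_frequency_vector("Subspace learning for image retrieval of image data", vocab)
# vec == [2, 1, 1]: "image" occurs twice, "retrieval" and "subspace" once each
```

Image-side descriptors such as SIFT or GIST would be computed by a feature-extraction library rather than by hand, but the resulting vectors enter the pipeline in the same way.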

[0021] Step S2, using the features of each modality extracted in step S1 to construct an incompletely observed multimodal data set;

[0022] The construction of the incompl...
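The patent's description of the construction is truncated at this point. As an assumption, not necessarily the patent's exact construction, one common way to represent an incompletely observed multimodal data set is to pair each sample's per-modality features with observation masks marking which modalities are present:

```python
import numpy as np

# Hypothetical incomplete multimodal data set: each sample may carry an image
# feature, a text feature, or both; a missing modality is marked with None.
samples = [
    {"image": np.array([0.2, 0.8]), "text": np.array([1.0, 0.0, 1.0])},  # complete pair
    {"image": np.array([0.5, 0.1]), "text": None},                       # image only
    {"image": None, "text": np.array([0.0, 2.0, 1.0])},                  # text only
]

# Observation masks record, per sample, which modalities were observed;
# later learning steps can then use complete and incomplete samples alike.
image_mask = [s["image"] is not None for s in samples]
text_mask = [s["text"] is not None for s in samples]
# image_mask == [True, True, False]; text_mask == [True, False, True]
```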


Abstract

The invention discloses an incomplete cross-modal retrieval method based on subspace learning. The method comprises the steps of collecting multimodal data and extracting features; constructing an incompletely observed multimodal data set; learning a shared subspace representation of the incompletely observed multimodal data set with a regression-based method; performing feature learning on the different modal data sets to select features with strong discriminative ability; mining the similarity relationships between and within the incompletely observed modalities and establishing an optimization objective function; solving the optimization objective function with the regression-based method to obtain the shared subspace representation of the multimodal data and the projection matrices; and performing cross-modal retrieval with the projection matrices. The method solves the problem of heterogeneous modal features, can fully exploit both complete-modality and incomplete-modality data, and improves cross-modal retrieval performance.
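As a rough illustration of the final retrieval step, the sketch below projects a text query and an image gallery into a shared subspace with modality-specific projection matrices and ranks the gallery by cosine similarity. The random matrices stand in for the projections the patent learns by optimization; everything here is illustrative, not the patent's actual algorithm.

```python
import numpy as np

def retrieve(query_feat, W_query, gallery_feats, W_gallery, top_k=2):
    """Cross-modal retrieval in a shared subspace: project the query and the
    gallery with their modality-specific projection matrices, then rank the
    gallery items by cosine similarity to the projected query."""
    q = query_feat @ W_query                      # query into the shared subspace
    G = gallery_feats @ W_gallery                 # gallery into the same subspace
    q = q / np.linalg.norm(q)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    sims = G @ q                                  # cosine similarities
    return np.argsort(-sims)[:top_k]              # indices of the best matches

rng = np.random.default_rng(0)
W_txt = rng.standard_normal((5, 3))   # placeholder for the learned text projection
W_img = rng.standard_normal((4, 3))   # placeholder for the learned image projection
text_query = rng.standard_normal(5)           # a text feature (e.g. word frequencies)
image_gallery = rng.standard_normal((10, 4))  # ten image features (e.g. GIST)
ranked = retrieve(text_query, W_txt, image_gallery, W_img)
```

Retrieving text with an image query works symmetrically by swapping the roles of the two projection matrices.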

Description

Technical Field

[0001] The invention relates to the technical field of cross-modal retrieval, and in particular to an incomplete cross-modal retrieval method based on subspace learning.

Background Art

[0002] With the rapid development of multimedia technology, users share massive amounts of multimedia information, such as images, texts and videos, every day. Data with the same semantic meaning are often described by several such media features; for example, a web page can be represented by text, pictures and hyperlinks. The explosive growth of this multimedia data has greatly increased the demand for cross-modal retrieval, such as retrieving images with text or retrieving text with images. Cross-modal retrieval therefore has significant research and application value.

[0003] Traditional cross-modal retrieval methods generally assume that each data point has a complete multimodal representation, such as a data set composed of web pa...


Application Information

IPC(8): G06F17/30
CPC: G06F16/90335
Inventors: 王亮, 吴书, 尹奇跃
Owner: 天津中科智能识别有限公司 (Tianjin Zhongke Intelligent Identification Co., Ltd.)