
Self-paced cross-modal matching method based on subspace

A matching method and cross-modal technology, applied in the field of pattern recognition. It addresses the problems that traditional methods cannot handle unlabeled information, that manually labeling data is time-consuming and laborious, and that existing approaches cannot meet everyday retrieval needs, achieving good application prospects and improved robustness and accuracy.

Active Publication Date: 2016-09-07
天津中科智能识别有限公司
Cites 3 · Cited by 15

AI Technical Summary

Problems solved by technology

Most traditional cross-modal matching methods are supervised: they rely on semantic labels to reduce the gap between heterogeneous modalities, but they cannot exploit unlabeled information, and manually labeling data is time-consuming and laborious.
In addition, some unsupervised methods consider neither the discriminability and correlation of features nor the semantic similarity between samples, and therefore cannot meet everyday retrieval needs.




Detailed Description of the Embodiments

[0014] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0015] The present invention learns two mapping matrices that project data of different modalities into the same subspace, performing sample selection and feature learning during the mapping, and uses multi-modal graph constraints to preserve both intra-modal and inter-modal similarity; the similarity between data of different modalities is then measured in the learned subspace to achieve cross-modal matching.
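The data flow described above can be sketched as follows. This is an illustrative sketch only: the feature dimensions, the subspace dimension, and the random stand-ins for the two mapping matrices are assumptions, not values from the patent, and the patented method would learn these matrices jointly with self-paced sample selection and multi-modal graph constraints rather than sample them randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features for two modalities (dimensions are illustrative):
# 50-d image features and 30-d text features, 100 samples each.
X_img = rng.standard_normal((100, 50))
X_txt = rng.standard_normal((100, 30))

d = 10  # dimension of the shared subspace (assumed)

# Stand-ins for the two learned mapping matrices; random matrices
# only demonstrate the mapping step, not the learned solution.
U_img = rng.standard_normal((50, d))
U_txt = rng.standard_normal((30, d))

# Map both modalities into the same subspace.
Z_img = X_img @ U_img
Z_txt = X_txt @ U_txt

# Cosine similarity in the shared subspace gives cross-modal match scores.
def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

scores = cosine_sim(Z_img, Z_txt)          # scores[i, j]: image i vs. text j
best_text_for_each_image = scores.argmax(axis=1)
```

Once the mapping matrices are learned, matching reduces to a nearest-neighbor search under this similarity in the shared subspace.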

[0016] As shown in figure 1, the subspace-based self-paced cross-modal matching method includes the following steps:

[0017] Step S1, collecting data samples of different modalities, establishing a cross-modal database, and dividing the cross-modal database into a training set and a test set;
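Step S1 can be sketched as below. The sample counts, feature dimensions, and the 80/20 split ratio are assumptions for illustration; the patent does not specify them. The key point is that the two modalities are paired, so both must be shuffled by the same index permutation before splitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy cross-modal database of paired samples (hypothetical sizes):
# index i pairs an image feature with its corresponding text feature.
n = 200
images = rng.standard_normal((n, 50))
texts = rng.standard_normal((n, 30))

# Shuffle pair indices once so both modalities stay aligned, then split.
idx = rng.permutation(n)
n_train = int(0.8 * n)   # 80/20 split is an assumption, not from the patent
train_idx, test_idx = idx[:n_train], idx[n_train:]

train = (images[train_idx], texts[train_idx])
test = (images[test_idx], texts[test_idx])
```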

[0018] It should be noted that the...



Abstract

The invention discloses a self-paced cross-modal matching method based on subspace. The method extracts feature vectors from the different modalities of data in a data set; obtains the mapping matrices corresponding to the different modalities in a training set through subspace self-paced learning; and maps the data samples of the different modalities in a test set into the same space through those mapping matrices, so that the data in the training set and the test set are mapped into a unified space. The similarity between query data and target data in the test set is then measured to obtain the cross-modal matching result. The method maps data of different modalities into a unified space and performs sample selection and feature learning during the mapping, thereby improving matching robustness and accuracy.
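The "self-paced" sample selection mentioned in the abstract can be illustrated with the standard hard-weighting scheme from self-paced learning: a sample is admitted only when its current loss falls below an age parameter, which grows over iterations so that easy samples are learned first and harder ones are admitted gradually. This is a generic sketch of that scheme, not the patent's specific objective; the loss values and the growth factor are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-sample reconstruction losses (hypothetical values).
losses = rng.uniform(0.0, 1.0, size=20)

# Hard self-paced weights: admit a sample only when its loss is below
# the age parameter lam; a growing lam lets harder samples in over time.
def self_paced_weights(losses, lam):
    return (losses < lam).astype(float)

lam = 0.3
for step in range(3):
    v = self_paced_weights(losses, lam)
    # ... here the mapping matrices would be re-estimated using only
    # the currently selected samples (those with v == 1) ...
    lam *= 1.5   # growth factor is an assumption, not from the patent
```

Because the admission threshold only grows, the selected set expands monotonically, which is what gives self-paced learning its easy-to-hard curriculum.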

Description

technical field

[0001] The invention relates to the technical field of pattern recognition, in particular to a subspace-based self-paced cross-modal matching method.

Background technique

[0002] Data in reality often have multiple modalities. For example, web page data contain both picture information and text information; video data contain both audio information and picture information. The fundamental task of cross-modal matching is to use one modality as a query to retrieve similar information in a heterogeneous modality. Most traditional cross-modal matching methods are supervised: they rely on semantic labels to reduce the gap between heterogeneous modalities, but they cannot exploit unlabeled information, and manually labeling data is time-consuming and laborious. In addition, some unsupervised methods consider neither the discriminability and correlation of features nor the semantic similarity between samples, and therefore cannot meet...

Claims


Application Information

IPC(8): G06K9/62 · G06K9/46 · G06F17/30
CPC: G06F16/90 · G06V10/462 · G06F18/22 · G06F18/214
Inventor 赫然 · 孙哲南 · 李志航 · 梁坚 · 曹冬
Owner 天津中科智能识别有限公司