Cross-modal retrieval method and system based on semantic condition association learning

A cross-modal conditional association learning technology, applied in the multimedia field, which can solve the problem that noisy cross-modal latent-space representations lack discriminative power

Inactive Publication Date: 2020-12-18
INST OF COMPUTING TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to propose a cross-modal retrieval method based on semantic conditional associati...




Embodiment Construction

[0057] The present invention comprises the following two key points:

[0058] Key point 1: Use label information to guide the deep feature learning of each modality's data. In terms of technical effect, this makes each modality's feature representation preserve multi-label semantic similarity, ensures the semantic discriminability of the feature representation, and improves cross-media retrieval performance.
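As a concrete illustration of key point 1, the following minimal sketch (hypothetical code, not the patent's exact network; the class name LabelGuidedEncoder, the layer sizes, and the binary cross-entropy loss are assumptions) shows a per-modality deep encoder whose features are guided by a multi-label classification head, so that the learned representation preserves multi-label semantic similarity:

    import torch.nn as nn
    import torch.nn.functional as F

    class LabelGuidedEncoder(nn.Module):
        """Per-modality deep encoder whose features are guided by multi-label supervision."""
        def __init__(self, input_dim: int, feat_dim: int, num_labels: int):
            super().__init__()
            # modality-specific deep feature extractor (layer sizes are illustrative)
            self.backbone = nn.Sequential(
                nn.Linear(input_dim, 1024), nn.ReLU(),
                nn.Linear(1024, feat_dim), nn.ReLU(),
            )
            # multi-label head used only to inject label semantics into the features
            self.label_head = nn.Linear(feat_dim, num_labels)

        def forward(self, x):
            feat = self.backbone(x)           # feature representation used for retrieval
            logits = self.label_head(feat)    # multi-label predictions for semantic guidance
            return feat, logits

    def label_guidance_loss(logits, multi_hot_labels):
        # multi-label guidance: binary cross-entropy against the ground-truth label vector
        return F.binary_cross_entropy_with_logits(logits, multi_hot_labels)

Samples that share labels receive similar classification targets, which pushes their deep features together and keeps the representation semantically discriminative.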

[0059] Key point 2: Establish conditional correlations between the modal feature representations and high-level semantic information. In terms of technical effect, this effectively mines the high-level semantic correlations between different modalities, reduces the impact of noisy labels on the cross-modal latent representations, and improves cross-modal retrieval accuracy.
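One possible reading of key point 2 is sketched below under explicit assumptions: the LabelSemanticNet mapping, the element-wise gating used as the conditioning, and the cosine-similarity objective are simplifications rather than the patent's exact formulation, and the sketch assumes the modal features and the semantic embedding share the same dimensionality. The idea is to map the multi-label vector to a high-level semantic embedding with a small network and then maximize the agreement of the two modal features conditioned on that embedding:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LabelSemanticNet(nn.Module):
        """Maps a multi-hot label vector to a high-level semantic embedding."""
        def __init__(self, num_labels: int, sem_dim: int):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(num_labels, 512), nn.ReLU(),
                                     nn.Linear(512, sem_dim))

        def forward(self, labels):
            return self.net(labels)

    def conditional_association_loss(img_feat, txt_feat, sem_emb):
        # Condition each modality's feature on the mined high-level semantics via a
        # simple element-wise gate (one possible conditioning), then maximize the
        # cross-modal agreement of the conditioned features.
        gate = torch.sigmoid(sem_emb)
        ci = F.normalize(img_feat * gate, dim=1)
        ct = F.normalize(txt_feat * gate, dim=1)
        # minimizing the negative cosine similarity maximizes the association of the
        # two modalities with respect to the shared label semantics
        return -F.cosine_similarity(ci, ct, dim=1).mean()

Because the association is mediated by the learned semantic embedding rather than by the raw label vector, individual noisy labels have less direct influence on the cross-modal latent representation.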

[0060] To make the above-mentioned features and effects of the present invention clearer and easier to understand, the following specific embodiments are described in detail together with the accompanying drawings as fol...



Abstract

According to the cross-modal retrieval method and system based on semantic condition association learning, the multi-label information serves as a new observation modality, and the multi-label semantic relations are effectively integrated into a cross-modal latent representation learning framework based on a deep neural network. On one hand, the feature learning process of each modality is guided through label semantic information, yielding deep feature representations that preserve the semantic relations and have discriminative ability, which improves cross-modal retrieval performance; on the other hand, the high-level semantics in the multi-label data are mined with a deep network, and the canonical correlation of the different modal features with respect to the high-level semantics is maximized by a conditional association learning method, so that the shared semantic information can be factored out of each modality's data and a direct association between the different modalities is established. An advantage of the method is that the influence of noisy labels on the cross-modal latent representation is effectively reduced.
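The abstract combines the two components described in the embodiment section. A hypothetical wiring of a single training step is sketched below, reusing the LabelGuidedEncoder, label_guidance_loss, LabelSemanticNet and conditional_association_loss sketches given earlier; the dimensions, label count and loss weight alpha are illustrative placeholders, not values from the patent:

    import torch

    # illustrative dimensions: 4096-d image features, 300-d text features, 80 labels
    img_enc = LabelGuidedEncoder(input_dim=4096, feat_dim=256, num_labels=80)
    txt_enc = LabelGuidedEncoder(input_dim=300, feat_dim=256, num_labels=80)
    label_net = LabelSemanticNet(num_labels=80, sem_dim=256)
    opt = torch.optim.Adam(
        list(img_enc.parameters()) + list(txt_enc.parameters()) + list(label_net.parameters()),
        lr=1e-4)

    def train_step(img_x, txt_x, labels, alpha=1.0):
        img_feat, img_logits = img_enc(img_x)
        txt_feat, txt_logits = txt_enc(txt_x)
        sem = label_net(labels)                               # high-level label semantics
        loss = (label_guidance_loss(img_logits, labels)       # semantic guidance, image side
                + label_guidance_loss(txt_logits, labels)     # semantic guidance, text side
                + alpha * conditional_association_loss(img_feat, txt_feat, sem))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()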

Description

Technical Field

[0001] The invention relates to cross-modal retrieval technology in the multimedia field, and in particular to a high-level semantic conditional association learning technology for cross-modal data.

Background Technique

[0002] Cross-modal retrieval technology is one of the important research topics in the field of multimedia; its goal is to help users obtain the multi-modal information they need. Cross-modal retrieval matches data of a given modality to semantically related data of another modality within massive multimedia collections. Therefore, cross-modal retrieval technology needs to solve the problem of how to establish relationships between heterogeneous modal content.

[0003] At present, most cross-modal retrieval algorithms realize the relationship measurement among heterogeneous modalities by learning a common latent space for samples from different modalities. In order to maintain the semantic consistency of cross-modal latent spaces, exis...
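Paragraph [0003] describes the common-latent-space paradigm: once the features of both modalities live in a shared space, retrieval reduces to nearest-neighbour search. The following self-contained sketch is a generic illustration of that paradigm (not the patent's specific system), ranking gallery items of one modality by cosine similarity to a query from the other modality:

    import torch
    import torch.nn.functional as F

    def cross_modal_retrieve(query_feat, gallery_feats, top_k=5):
        """Rank gallery items of the other modality by cosine similarity to the query
        in the learned common latent space."""
        q = F.normalize(query_feat.unsqueeze(0), dim=1)   # (1, d) query embedding
        g = F.normalize(gallery_feats, dim=1)             # (N, d) gallery embeddings
        scores = (g @ q.t()).squeeze(1)                   # cosine similarities
        return torch.topk(scores, k=min(top_k, gallery_feats.size(0)))

    # e.g. retrieve the texts most relevant to an image query:
    # scores, indices = cross_modal_retrieve(image_feature, all_text_features)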


Application Information

IPC(8): G06F16/48, G06F16/45, G06K9/62, G06N3/04, G06N3/08
CPC: G06F16/48, G06F16/45, G06N3/08, G06N3/045, G06F18/241
Inventor 王树徽 (Wang Shuhui), 宋国利 (Song Guoli), 黄庆明 (Huang Qingming)
Owner INST OF COMPUTING TECH CHINESE ACAD OF SCI