Image text cross-modal retrieval method based on category information alignment

A category-information and text technology, applied in the field of image-text cross-modal retrieval based on category information alignment, addressing the problem that existing cross-modal retrieval methods have insufficient retrieval accuracy.

Active Publication Date: 2021-06-22
UNIV OF ELECTRONIC SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

But the retrieval accuracy of this cross-modal retrieval method is insufficient.




Embodiment Construction

[0065] Specific embodiments of the present invention are described below in conjunction with the accompanying drawings, so that those skilled in the art can better understand the present invention. It should be noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would dilute the main content of the present invention.

[0066] In cross-modal retrieval based on deep learning, the most commonly retrieved modality pair is image and text. In the present invention, an image I, its corresponding text T, and the category information C are stored as one image-text pair instance, so that N image-text pair instances constitute the training data set. The corresponding real image features (referred to as true image features) and real text features (referred to as true text features) can be expressed as ... In this embodiment, the true image features ... In order to utilize ...
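For illustration, a minimal sketch of how such a training set could be laid out is given below. The class and field names (ImageTextPair, image, text, category) are assumptions for readability, not notation from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ImageTextPair:
    """One training instance: an image I, its paired text T, and category information C."""
    image: np.ndarray   # the image I (raw pixels or a precomputed feature vector)
    text: str           # the corresponding text T
    category: int       # the category label C shared by the pair

# N such image-text pair instances constitute the training data set.
training_set = [
    ImageTextPair(image=np.zeros((224, 224, 3)), text="a dog running on grass", category=3),
    # ... N instances in total
]
```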



Abstract

The invention discloses an image-text cross-modal retrieval method based on category information alignment, which aims to preserve the discrimination between instances (image-text pairs) of different semantic categories while eliminating the heterogeneity gap between modalities. To this end, category information is innovatively introduced into a common representation space, namely the image-text common space, to minimize a discrimination loss, and a cross-modal loss is introduced to align information from different modalities. In addition, a category information embedding method is adopted to generate false features, instead of other DNN-based label-information methods; at the same time, a modal invariance loss is minimized in a category common space to learn modality-invariant features. Under the guidance of this learning strategy, the pairwise semantic similarity information of coupled image-text items is exploited as fully as possible, and the learned representations are guaranteed to possess both the discrimination of the semantic structure and cross-modal invariance.
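As a rough illustration of how such objectives are commonly combined, the sketch below sums a discrimination loss, a cross-modal alignment loss, and a modal invariance loss into a single training objective. All names, loss choices, and weights here are assumptions; the patent's exact loss formulations are not reproduced.

```python
import torch.nn.functional as F

def total_loss(img_common, txt_common,   # representations in the image-text common space
               img_cat, txt_cat,         # representations in the category common space
               logits_img, logits_txt,   # category predictions from the common representations
               labels,                   # category information C of each image-text pair
               lambda_align=1.0, lambda_inv=1.0):
    # Discrimination loss: keep instances of different semantic categories apart.
    l_disc = F.cross_entropy(logits_img, labels) + F.cross_entropy(logits_txt, labels)
    # Cross-modal loss: align paired image and text information in the common space.
    l_align = F.mse_loss(img_common, txt_common)
    # Modal invariance loss: minimized in the category common space so that the same
    # pair's representations become indistinguishable across modalities.
    l_inv = F.mse_loss(img_cat, txt_cat)
    return l_disc + lambda_align * l_align + lambda_inv * l_inv
```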

Description

Technical field

[0001] The invention belongs to the technical field of image-text cross-modal retrieval, and more specifically relates to an image-text cross-modal retrieval method based on category information alignment.

Background technique

[0002] Cross-modal retrieval refers to the process of mutually retrieving data of different modalities. The existing mainstream cross-modal retrieval methods fall into three types.

[0003] The first type is cross-modal retrieval based on basic subspace learning, which mainly learns projection matrices from paired datasets sharing the same semantic information, projects the features of different modalities into a common latent subspace, and then measures the similarity of the different modalities in that space. For example, methods based on canonical correlation analysis and kernel-based methods learn linear projections or choose appropriate kernel functions to generate common representations by maximizing the pairwise correlations between the two modalities ...
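To make the subspace-learning family in [0003] concrete, here is a minimal sketch using canonical correlation analysis to learn linear projections into a common latent subspace and then rank one modality against the other there. The data, dimensions, and cosine-similarity measure are placeholder assumptions, not details from the patent.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Placeholder paired features: N = 1000 instances, image features 512-d, text features 300-d.
rng = np.random.default_rng(0)
img_feats = rng.standard_normal((1000, 512))
txt_feats = rng.standard_normal((1000, 300))

# Learn linear projections that maximize pairwise correlation between the two modalities.
cca = CCA(n_components=10)
img_common, txt_common = cca.fit_transform(img_feats, txt_feats)

# Retrieval: rank all texts for a query image by cosine similarity in the common subspace.
query = img_common[0]
scores = (txt_common @ query) / (np.linalg.norm(txt_common, axis=1) * np.linalg.norm(query))
ranking = np.argsort(-scores)  # indices of texts, most similar first
```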


Application Information

IPC(8): G06F16/432; G06F16/48; G06N3/04; G06N3/08; G06T9/00
CPC: G06F16/434; G06F16/48; G06T9/002; G06N3/08; G06N3/045
Inventor: 杨阳 (Yang Yang), 王威扬 (Wang Weiyang), 何仕远 (He Shiyuan)
Owner: UNIV OF ELECTRONIC SCI & TECH OF CHINA