
Cross-modal retrieval method based on label fine-grained self-supervision

A cross-modal, fine-grained technology, applied in the field of cross-modal retrieval, that addresses problems such as the difficulty of measuring cross-modal similarity caused by the inconsistent distributions and representations of different modalities.

Inactive Publication Date: 2021-09-03
BEIJING UNIV OF POSTS & TELECOMM +1

Problems solved by technology

The key problem of cross-modal retrieval is that the distributions and representations of different modalities are inconsistent, and this heterogeneity gap makes cross-modal similarity difficult to measure.



Embodiment Construction

[0018] The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art on the basis of these embodiments, without creative effort, fall within the protection scope of the present invention.

[0019] As shown in Figure 1, which is a flowchart of a cross-modal retrieval method based on label fine-grained self-supervision according to an embodiment of the present invention, the method includes the following steps:

[0020] S101: Construct feature extraction network

[0021] For each of the two modalities in the database, a feature extraction network is constructed, and a label semantic extraction network is constructed for the labels. Extract the i...
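Step S101 above can be sketched as three parallel branches: an image feature network, a text feature network, and a label semantic network, each ending in a hash layer. The sketch below uses numpy with random, untrained weights standing in for trained networks, and the input dimensions (4096-d image features, 1386-d text features, 24 labels, 16 hash bits) are hypothetical choices, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
HASH_BITS = 16

def hash_branch(in_dim: int, out_dim: int):
    """One random linear layer standing in for a trained network branch.

    tanh is the usual continuous relaxation of the binary hash constraint.
    """
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

image_net = hash_branch(4096, HASH_BITS)  # e.g. CNN features -> hash
text_net  = hash_branch(1386, HASH_BITS)  # e.g. bag-of-words -> hash
label_net = hash_branch(24,   HASH_BITS)  # multi-label vector -> semantic hash

img_feat  = rng.standard_normal(4096)
txt_feat  = rng.standard_normal(1386)
label_vec = rng.integers(0, 2, 24).astype(float)  # multi-hot label vector

# sign() binarizes the relaxed outputs into {-1, +1} hash codes.
img_code = np.sign(image_net(img_feat))
txt_code = np.sign(text_net(txt_feat))
lab_code = np.sign(label_net(label_vec))
```

All three branches emit codes of the same length, which is what lets the label branch self-supervise the two modality branches during hash learning.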


Abstract

The invention provides a cross-modal retrieval method based on label fine-grained self-supervision. Building on the DCMH approach, which independently extracts features from two modalities and directly converts them into hash codes, the method adds a self-supervised semantic network that incorporates multi-label information. The fine-grained information carried by the labels supervises the hash learning process of the two modalities, and two discriminators, one in image form and one in text form, are established to capture the distribution difference between the modalities. A model trained with this method produces hash code representations of image-text data whose similarity can be measured reliably, so that data in one modality can be used to retrieve data in the other.
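The discriminators mentioned in the abstract play the usual adversarial role: each one tries to distinguish codes produced by a modality network from codes produced by the label semantic network, while the hash networks are trained to fool it, pulling the two distributions together. The following is a minimal sketch under assumed details (a logistic discriminator, 16-bit codes, random weights); it is an illustration of the adversarial loss shape, not the patent's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
HASH_BITS = 16

def discriminator(w: np.ndarray):
    """Logistic classifier guessing which network a hash code came from."""
    return lambda h: 1.0 / (1.0 + np.exp(-(h @ w)))

d_img = discriminator(rng.standard_normal(HASH_BITS))  # image-form discriminator
d_txt = discriminator(rng.standard_normal(HASH_BITS))  # text-form discriminator

img_code = np.sign(rng.standard_normal(HASH_BITS))  # stand-in image-net code
lab_code = np.sign(rng.standard_normal(HASH_BITS))  # stand-in label-net code

# GAN-style objective (sketch): the discriminator maximizes this by telling
# label-net codes (target 1) apart from image-net codes (target 0), while
# the hash networks minimize it, shrinking the distribution difference.
adv_loss = -(np.log(d_img(lab_code)) + np.log(1.0 - d_img(img_code)))
```

The text-form discriminator `d_txt` is trained symmetrically against the text-modality codes; at equilibrium neither discriminator can separate modality codes from label-semantic codes.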

Description

technical field

[0001] The invention relates to cross-modal retrieval, and in particular to a cross-modal retrieval method based on label fine-grained self-supervision.

Background technique

[0002] Cross-modal retrieval uses data in one modality (such as an image) to retrieve semantically related data in another modality (such as text), and has received extensive attention in recent years. With the development of big data, unimodal retrieval such as image retrieval has been widely used in daily life, but the modality gap between heterogeneous data makes cross-modal retrieval a challenging task. Single-modal retrieval returns results in the same modality as the query; its main limitation is that the results must share the query's modality, for example searching text by text or images by images. Unimodal retrieval cannot directly measure the similarity between different modalities,...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06F16/31, G06F16/51, G06K9/62
CPC: G06F16/325, G06F16/51, G06F18/22, G06F18/214
Inventor: 赵海英
Owner: BEIJING UNIV OF POSTS & TELECOMM