Cross-modal retrieval method and device, computer equipment and storage medium

A cross-modal retrieval technology, applied in the field of multi-modal data retrieval, which can solve the problems of low cross-modal retrieval accuracy, overly simple feature extraction, and loss of feature information.

Active Publication Date: 2019-05-21
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0005] It can be seen that the feature extraction methods for image and text data in the prior art are based on traditional algorithms. The feature extraction is too simple, which leads to the loss of some feature information and to low cross-modal retrieval accuracy.



Embodiment Construction

[0029] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0030] It can be understood that the terms "first", "second", etc. used in the present application may be used to describe various elements herein, but unless otherwise specified, these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first xx script could be termed a second xx script, and, similarly, a second xx script could be termed a first xx script, without departing from the scope of the present application.

[0031] Figure 1 is an application environment diagram of a cross-modal retrieval method provided i...



Abstract

The invention relates to the technical field of multi-modal data retrieval, and in particular to a cross-modal retrieval method and device, computer equipment and a storage medium. The method comprises the steps of: obtaining first-modal to-be-matched data, where the first-modal to-be-matched data comprises image data and text data; when the first-modal to-be-matched data is image data, extracting a feature vector with a deep residual network (ResNet) model, and when the first-modal to-be-matched data is text data, extracting a feature vector with a variational auto-encoder model; mapping the feature vector to a common representation space with a preset mapping function; and calculating the similarity between the first-modal to-be-matched data and second-modal matching data in the common representation space, and outputting the corresponding second-modal matching data according to the similarity to complete the cross-modal retrieval. The characteristics of the data are extracted more fully, and retrieval accuracy is improved.
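
The following Python sketch illustrates the retrieval pipeline summarised above: a ResNet backbone extracts image features, the encoder of a small variational auto-encoder extracts text features, linear mapping functions project both into a common representation space, and cosine similarity ranks the candidate items. The layer sizes, the bag-of-words text representation and the linear mappings are illustrative assumptions, not the patent's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

COMMON_DIM = 256      # dimensionality of the common representation space (assumed)
TEXT_VOCAB = 5000     # size of the bag-of-words text vector (assumed)
TEXT_LATENT = 128     # VAE latent dimensionality (assumed)

# Image branch: deep residual network (ResNet) as the feature extractor.
resnet = models.resnet50()        # weights omitted here; load pretrained weights in practice
resnet.fc = nn.Identity()         # drop the classifier head, keep the 2048-d feature vector
resnet.eval()

# Text branch: variational auto-encoder; only the encoder is needed at retrieval time.
class TextVAEEncoder(nn.Module):
    def __init__(self, vocab_size, latent_dim):
        super().__init__()
        self.hidden = nn.Linear(vocab_size, 512)
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)

    def forward(self, x):
        h = F.relu(self.hidden(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick; at retrieval time the mean alone can serve as the feature.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

text_encoder = TextVAEEncoder(TEXT_VOCAB, TEXT_LATENT)

# Preset mapping functions into the common representation space (assumed linear here).
map_image = nn.Linear(2048, COMMON_DIM)
map_text = nn.Linear(TEXT_LATENT, COMMON_DIM)

def embed_image(images):
    """images: (N, 3, 224, 224) tensor -> (N, COMMON_DIM) L2-normalised embeddings."""
    with torch.no_grad():
        feats = resnet(images)
    return F.normalize(map_image(feats), dim=-1)

def embed_text(bows):
    """bows: (N, TEXT_VOCAB) bag-of-words tensor -> (N, COMMON_DIM) L2-normalised embeddings."""
    return F.normalize(map_text(text_encoder(bows)), dim=-1)

# Cross-modal retrieval: rank second-modality candidates by cosine similarity.
query_image = torch.randn(1, 3, 224, 224)      # stand-in for a real query image
candidate_texts = torch.rand(10, TEXT_VOCAB)   # stand-ins for 10 candidate text items

similarity = embed_image(query_image) @ embed_text(candidate_texts).T   # shape (1, 10)
best_match = similarity.argmax(dim=-1)
print("most similar text index:", best_match.item())

In practice the mapping functions (and the two encoders) would be trained so that matching image-text pairs lie close together in the common space; the sketch only shows the inference-time flow.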

Description

Technical field

[0001] The present invention relates to the technical field of multimodal data retrieval, and in particular to a cross-modal retrieval method, device, computer equipment and storage medium.

Background technique

[0002] In recent years, with the rapid development of deep learning technology and the rapid growth of multi-modal data, researchers have begun to combine the two relatively independent fields of computer vision and natural language processing in order to realize visual-semantic joint embedding. This task requires representing image and text data as fixed-length vectors and then embedding them into the same vector space. Cross-modal retrieval is a typical application of visual-semantic joint embedding. Data such as text, pictures and audio are now growing exponentially, and information carriers are becoming more and more diversified, so people hope to be able to retrieve information across different information carriers. Most of the existing info...
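
As a tiny illustration of visual-semantic joint embedding, the sketch below computes the cosine similarity between an image vector and a text vector that are assumed to already lie in the same fixed-length vector space; the numbers are made up for illustration.

import numpy as np

image_vec = np.array([0.8, 0.1, 0.3])   # hypothetical image embedding
text_vec = np.array([0.7, 0.2, 0.4])    # hypothetical text embedding

cosine = float(image_vec @ text_vec) / (np.linalg.norm(image_vec) * np.linalg.norm(text_vec))
print(f"cosine similarity: {cosine:.3f}")   # values near 1 indicate a likely cross-modal match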


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/43; G06N3/04
Inventor: 宋彬, 姚继鹏, 郭洁, 罗文雯
Owner: XIDIAN UNIV