
A method and system for cross-modal retrieval

A cross-modal retrieval technology, applied in the field of cross-modal retrieval methods and systems, which solves problems such as the difficulty of mining complex cross-modal information and the loss of inter-modal relationships, and achieves the effect of improving retrieval accuracy and speed.

Active Publication Date: 2019-01-11
SHENZHEN UNIV

AI Technical Summary

Problems solved by technology

[0004] The technical problem to be solved by the present invention is to provide a cross-modal retrieval method and system, aiming to solve the problems in the prior art that, when performing cross-modal retrieval, the interrelationships between modalities are lost and complex cross-modal information is difficult to mine.




Embodiment Construction

[0042] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it.

[0043] Figure 1 shows a cross-modal retrieval method provided by an embodiment of the present invention, which includes the following steps (an illustrative code sketch follows the list):

[0044] S101. Preprocess the image and the text respectively to obtain image features and text features;

[0045] S102. According to the image features and the text features, use a stacked restricted Boltzmann machine to extract the modality-friendly features of the image and the modality-friendly features of the text, and use a multimodal deep belief network to extract the modality-mutual features of the image and the modality-mutual features of the text;

[0046] S103, us...
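The following sketch, in plain NumPy, illustrates how steps S101 and S102 could be realised: per-modality stacks of restricted Boltzmann machines trained greedily with contrastive divergence (CD-1) yield the modality-friendly features, and a joint RBM over the concatenated top-level codes plays the role of the multimodal deep belief network that yields the modality-mutual features. All layer sizes, the toy input features standing in for the preprocessing of S101, and the training settings are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch (not the patent's exact architecture): Bernoulli RBMs trained with
# CD-1, stacked per modality, plus a joint RBM over the concatenated top-level codes,
# as in a multimodal deep belief network. All sizes/settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities and a binary sample.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # CD-1 parameter updates, averaged over the batch.
        batch = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def train_stack(data, layer_sizes, epochs=10):
    """Greedy layer-wise training of a stack of RBMs (a deep belief network)."""
    stack, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        stack.append(rbm)
        x = rbm.hidden_probs(x)          # feed hidden activations to the next layer
    return stack, x

# Toy inputs in [0, 1) standing in for the preprocessed features of S101.
img_feat = rng.random((256, 128))        # e.g. normalised visual descriptors
txt_feat = rng.random((256, 64))         # e.g. bag-of-words / topic features

# S102a: per-modality stacked RBMs -> "modality-friendly" features.
img_stack, img_friendly = train_stack(img_feat, [96, 64])
txt_stack, txt_friendly = train_stack(txt_feat, [48, 32])

# S102b: joint RBM over both modalities -> "modality-mutual" features,
# i.e. the top layer of a multimodal deep belief network.
joint_in = np.hstack([img_friendly, txt_friendly])
joint_rbm = RBM(joint_in.shape[1], 48)
for _ in range(10):
    joint_rbm.cd1_step(joint_in)
mutual = joint_rbm.hidden_probs(joint_in)
print(img_friendly.shape, txt_friendly.shape, mutual.shape)
```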



Abstract

The invention is applicable to the technical field of retrieval. A cross-modal retrieval method is provided, which uses a stacked restricted Boltzmann machine and a multi-modal deep belief network to extract the modality-friendly features and the modality-mutual features of an image and a text respectively. The modality-friendly features make the statistical characteristics of the extracted features more similar to those of the input, while the modality-mutual features recover the mutual information lost in the original input instances; the two kinds of features are fused to obtain mixed features, and the final shared features are obtained through multiple layers of bimodal auto-encoding. The embodiment of the invention uses the stacked restricted Boltzmann machine to extract the internal characteristics of each modality, fuses them with the lost mutual information mined by the deep belief network, and thereby constructs mixed features suitable for cross-modal retrieval. By using the multi-layer bimodal auto-encoding network to mine complex cross-modal information, the accuracy and speed of the cross-modal retrieval task are effectively improved.
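To make the fusion step concrete, the following is a minimal sketch of a single bimodal autoencoder layer, not the patent's exact architecture: the modality-friendly and modality-mutual features of each modality are concatenated into mixed features, a shared hidden code is computed from both modalities, and both inputs are reconstructed from that code. The patent stacks several such layers; the dimensions, the squared-error objective and the plain gradient-descent training below are assumptions made for illustration.

```python
# Sketch of one bimodal autoencoder layer over assumed "mixed features":
# one shared code reconstructs both the image and the text input.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n = 256                                   # number of image-text pairs
mix_img = rng.random((n, 64 + 48))        # image mixed features (friendly + mutual)
mix_txt = rng.random((n, 32 + 48))        # text mixed features (friendly + mutual)

d1, d2, k = mix_img.shape[1], mix_txt.shape[1], 64   # k = shared-code size
We1 = 0.01 * rng.standard_normal((d1, k)) # image encoder weights
We2 = 0.01 * rng.standard_normal((d2, k)) # text encoder weights
Wd1 = 0.01 * rng.standard_normal((k, d1)) # image decoder weights
Wd2 = 0.01 * rng.standard_normal((k, d2)) # text decoder weights
bh, b1, b2 = np.zeros(k), np.zeros(d1), np.zeros(d2)
lr = 0.1

for epoch in range(200):
    # Forward pass: shared code from both modalities, then reconstructions.
    h  = sigmoid(mix_img @ We1 + mix_txt @ We2 + bh)
    r1 = sigmoid(h @ Wd1 + b1)
    r2 = sigmoid(h @ Wd2 + b2)

    # Backward pass: squared reconstruction error through sigmoid outputs.
    d_r1 = (r1 - mix_img) * r1 * (1 - r1)
    d_r2 = (r2 - mix_txt) * r2 * (1 - r2)
    d_h  = (d_r1 @ Wd1.T + d_r2 @ Wd2.T) * h * (1 - h)

    # Gradient-descent updates, averaged over the batch.
    Wd1 -= lr * h.T @ d_r1 / n;  b1 -= lr * d_r1.mean(axis=0)
    Wd2 -= lr * h.T @ d_r2 / n;  b2 -= lr * d_r2.mean(axis=0)
    We1 -= lr * mix_img.T @ d_h / n
    We2 -= lr * mix_txt.T @ d_h / n
    bh  -= lr * d_h.mean(axis=0)

# The shared features that would be used for retrieval.
shared = sigmoid(mix_img @ We1 + mix_txt @ We2 + bh)
print(shared.shape)
```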

Description

Technical Field

[0001] The invention belongs to the technical field of retrieval, and in particular relates to a cross-modal retrieval method and system.

Background Art

[0002] Cross-modal retrieval is a novel retrieval method that is capable of retrieving multimodal data. For example, given an image, retrieve the corresponding text in a text database; given a text, find the corresponding image in an image database.

[0003] At present, cross-modal retrieval methods based on deep neural networks mainly include two steps: (1) extracting the internal features of each modality and the features between the modalities; (2) establishing the shared features. However, in the first step, the mutual information between modalities is often lost; in the second step, current methods use relatively shallow networks, and it is difficult to mine complex cross-modal information.

Contents of the Invention

[0004] The technical problem to be solved by the present invention is to provide a cross-modal retrieval method and system, aiming to solve the problems in the prior art that, when performing cross-modal retrieval, the interrelationships between modalities are lost and complex cross-modal information is difficult to mine.
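As a hedged illustration of the retrieval scenario described in paragraph [0002], the sketch below ranks a text database against an image query by cosine similarity in a common feature space. The similarity measure and the randomly generated shared-feature matrices are assumptions standing in for features produced by a model such as the one described above; they are not steps stated in the patent text.

```python
# Cross-modal retrieval by nearest neighbours in an assumed shared feature space:
# an image query retrieves the closest texts by cosine similarity.
import numpy as np

def cosine_retrieve(query, database, top_k=5):
    """Return indices of the top_k database rows most similar to the query vector."""
    q = query / (np.linalg.norm(query) + 1e-12)
    d = database / (np.linalg.norm(database, axis=1, keepdims=True) + 1e-12)
    scores = d @ q
    return np.argsort(-scores)[:top_k]

rng = np.random.default_rng(2)
shared_texts = rng.random((1000, 64))    # shared features of a text database (assumed)
shared_image = rng.random(64)            # shared feature of one query image (assumed)

print(cosine_retrieve(shared_image, shared_texts))   # indices of best-matching texts
```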


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/53; G06F16/58; G06F16/33; G06K9/62
CPC: G06F18/253
Inventors: Cao Wenming (曹文明), Lin Qiubin (林秋斌)
Owner: SHENZHEN UNIV