
Cross-modal retrieval method and device for multi-modal data, equipment and medium

A cross-modal retrieval technology for multi-modal data, applied to a retrieval method, a device, equipment, and a computer-readable storage medium; it can solve problems such as inaccurate parameter adjustment and the resulting reduced accuracy of cross-modal retrieval of multi-modal data.

Pending Publication Date: 2019-12-20
GUANGDONG UNIV OF TECH
Cites: 6 · Cited by: 10
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, the sample data do not always have an exact correspondence. Therefore, the prior-art approach of adjusting the network parameters of the deep neural network according to correlation-ranking results still produces inaccurate adjustments, which reduces the accuracy of cross-modal retrieval of multi-modal data.

Method used




Embodiment Construction

[0052] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art on the basis of these embodiments, without creative effort, fall within the protection scope of the present invention.

[0053] The core of the embodiments of the present invention is to provide a cross-modal retrieval method for multi-modal data that can improve the accuracy of cross-modal retrieval of multi-modal data. Another core is to provide a cross-modal retrieval device, equipment, and computer-readable storage medium for multi-modal data, all of which have the above-mentioned beneficial effects.

[0054] In order ...



Abstract

The invention discloses a cross-modal retrieval method and device for multi-modal data, cross-modal retrieval equipment for multi-modal data, and a computer-readable storage medium. The method comprises the steps of: inputting training sample data of different modalities, in batches, into the deep neural network corresponding to each modality, and obtaining the sample data features of all training sample data; mapping each sample data feature into a common space, and calculating a corresponding loss function according to the intra-class low-rank loss constraint and the semantic consistency constraint on same-class training samples from different modalities; adjusting the network parameters of the deep neural networks using the loss function, and determining a target feature extraction model; and, after obtaining target data and to-be-retrieved data of different modalities, calling the target feature extraction model to perform the cross-modal retrieval operation and obtain a retrieval ranking result of the to-be-retrieved data corresponding to the target data. The target feature extraction model can thereby extract higher-quality data features, improving the accuracy of cross-modal retrieval of multi-modal data.
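The two constraints named in the abstract can be illustrated with a minimal NumPy sketch. This is not the patent's implementation; it only shows one plausible reading, in which the intra-class low-rank constraint is realized as a nuclear-norm penalty (a standard convex surrogate for rank) on the stacked common-space features of same-class samples from both modalities, and semantic consistency is realized as a distance penalty between paired features. The function names, the use of NumPy, and the exact loss forms are assumptions.

```python
import numpy as np

def nuclear_norm(M):
    # Sum of singular values: a convex surrogate for matrix rank.
    return np.linalg.svd(M, compute_uv=False).sum()

def intra_class_low_rank_loss(img_feats, txt_feats, labels):
    # For each class, stack that class's common-space features from both
    # modalities and penalize the nuclear norm of the block, encouraging
    # same-class features to lie in a shared low-rank subspace.
    loss = 0.0
    for c in np.unique(labels):
        block = np.vstack([img_feats[labels == c], txt_feats[labels == c]])
        loss += nuclear_norm(block)
    return loss

def semantic_consistency_loss(img_feats, txt_feats, labels):
    # Paired samples should map to nearby points in the common space
    # (squared Euclidean distance between corresponding rows here).
    return np.mean(np.sum((img_feats - txt_feats) ** 2, axis=1))
```

In a full pipeline these terms would be weighted, summed, and back-propagated through the per-modality networks; the gradient step itself is omitted here.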

Description

technical field

[0001] The present invention relates to the field of data retrieval, and in particular to a cross-modal retrieval method, device, equipment, and computer-readable storage medium for multi-modal data.

Background technique

[0002] With the continuous advancement of the information society, cross-modal retrieval of multi-modal data is increasingly widely used in real life. For example, image information matching the description in a piece of voice information is retrieved according to that voice information; or voice information matching the description in a piece of text information is retrieved according to that text information.

[0003] In the process of cross-modal retrieval, it is necessary to obtain common data features of the cross-modal data so that they can be compared directly. The quality of the data features extracted from multi-modal data using deep neural networks directly affects the accuracy of cross-modal retrieval of multi...
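Once both modalities are embedded in a common space, as the background section describes, retrieval reduces to ranking gallery items of the other modality by similarity to the query's feature vector. A minimal sketch, assuming cosine similarity as the comparison measure (the patent does not specify one; the function name is hypothetical):

```python
import numpy as np

def rank_by_cosine(query, gallery):
    # Rank gallery feature vectors by cosine similarity to the query
    # feature vector; returns indices, most similar first.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)
```

For example, a voice query embedded at `[1, 0]` against image features `[[0, 1], [1, 0.1], [1, 0]]` ranks the third image first.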

Claims


Application Information

Patent Timeline
IPC(8): G06F16/2458; G06N3/04
CPC: G06F16/2458; G06N3/04; Y02D10/00
Inventor: 刘文印, 康培培, 王崎, 林泽航, 徐凯, 杨振国
Owner GUANGDONG UNIV OF TECH