Few-sample cross-modal hash retrieval common representation learning method

A learning method and cross-modal retrieval technology, applied in neural learning methods, unstructured text data retrieval, multimedia data retrieval, and related fields, which can solve problems such as the inability to capture data correlations

Pending Publication Date: 2020-10-09
SUN YAT SEN UNIV
Cites: 1 · Cited by: 14

AI Technical Summary

Problems solved by technology

However, prior methods fail to effectively capture the correlations in the data and to extract representative common representations

Method used




Detailed Description of the Embodiments

[0055] The accompanying drawings are for illustrative purposes only and cannot be construed as limiting the patent;

[0056] In order to better illustrate this embodiment, some parts in the drawings will be omitted, enlarged or reduced, and do not represent the size of the actual product;

[0057] It is understood by those skilled in the art that certain known structures and descriptions thereof may be omitted in the drawings.

[0058] The technical solutions of the present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0059] As shown in Figure 1, a few-sample cross-modal hash retrieval common representation learning method includes the following steps:

[0060] S1: Divide the dataset and preprocess the original image and text data;

[0061] S2: Establish two parallel deep network structures, respectively extracting feature representations from preprocessed images and texts;

[0062] S3: Establish a hash layer...
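To make steps S2 and S3 concrete, here is a minimal PyTorch sketch of two parallel branches feeding a shared-length hash layer. The excerpt does not disclose the actual network architectures, so the backbone layouts, dimensions, and the tanh relaxation below are illustrative assumptions, not the patented design.

```python
# Minimal sketch of steps S2-S3: two parallel branches, one hash layer.
# All layer shapes and defaults are assumptions, not the patent's design.
import torch
import torch.nn as nn


class ImageBranch(nn.Module):
    """Toy convolutional image feature extractor (assumed layout)."""

    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, img):
        return self.net(img)


class TextBranch(nn.Module):
    """Toy MLP over a bag-of-words text vector (assumed layout)."""

    def __init__(self, vocab_size=1000, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, 1024), nn.ReLU(),
            nn.Linear(1024, feat_dim), nn.ReLU(),
        )

    def forward(self, bow):
        return self.net(bow)


class HashLayer(nn.Module):
    """Maps either modality's features to K-bit relaxed codes in (-1, 1);
    taking sign() of the output at retrieval time yields binary codes."""

    def __init__(self, feat_dim=512, n_bits=64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_bits)

    def forward(self, feat):
        return torch.tanh(self.fc(feat))
```

Because both branches share the same hash layer output length, image and text codes live in a common Hamming space and can be compared directly.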



Abstract

The invention provides a few-sample cross-modal hash retrieval common representation learning method. The method designs a self-knowing/adversary-knowing network built mainly from two modules: a self-knowing module and an adversary-knowing module. The self-knowing module makes full use of the information hidden in the data itself, fusing features from different levels to extract more global features. Building on the self-knowing module, the adversary-knowing module models the correlations among all samples and captures the nonlinear dependencies between data, so that common representations of data from different modalities can be learned more effectively. Finally, a loss function that preserves intra-modal and inter-modal similarity is established and used to train and optimize the network. The method can effectively alleviate the data-imbalance problem that arises with few samples and learn more representative common representations, thereby greatly improving cross-modal retrieval precision.
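The exact similarity-preserving loss is not given in this excerpt. As a sketch of the general idea only, the following shows a common formulation from the deep cross-modal hashing literature (a DCMH-style pairwise negative log-likelihood), applied both across modalities and within each modality; the function names and the 0.5 scaling are assumptions, not the patent's disclosed loss.

```python
# DCMH-style pairwise similarity loss: a common formulation from the
# literature, NOT the patent's disclosed loss. Names are assumptions.
import torch
import torch.nn.functional as F


def pairwise_loss(u, v, sim):
    """u, v: (n, K) relaxed hash codes; sim: (n, n) 0/1 similarity matrix.
    Encourages similar pairs to have large inner products and dissimilar
    pairs small ones (negative log-likelihood, computed stably)."""
    theta = 0.5 * (u @ v.t())
    # softplus(theta) = log(1 + exp(theta)), numerically stable
    return torch.mean(F.softplus(theta) - sim * theta)


def total_loss(img_codes, txt_codes, sim):
    inter = pairwise_loss(img_codes, txt_codes, sim)    # inter-modal term
    intra = (pairwise_loss(img_codes, img_codes, sim)   # intra-modal terms
             + pairwise_loss(txt_codes, txt_codes, sim))
    return inter + intra
```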

Description

Technical field

[0001] The invention relates to the field of computer vision information retrieval, and more specifically to a common representation learning method for cross-modal hash retrieval with few samples.

Background technique

[0002] The ever-increasing volume of data in different modalities on the Internet has made cross-modal retrieval more and more widely used. Cross-modal retrieval refers to using data of one modality as the query, searching a database composed of data of another modality, and returning similar data. Images and text are the two most common types of multimedia data, and hashing methods map high-dimensional data into low-dimensional binary codes, which improves retrieval speed and saves storage space; the discussion here is therefore limited to hash retrieval between images and text.

[0003] In recent years, the academic community has proposed a variety of deep-learning-based cross-modal hash retrieval algorithms and achieved good retrieval performance...
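To illustrate why the binary codes mentioned in [0002] speed up retrieval: similarity search over hash codes reduces to Hamming distance, which costs a single XOR plus a popcount per comparison. The codes below are made-up toy values, not outputs of the patented method.

```python
# Toy Hamming-distance retrieval over 8-bit hash codes (made-up values).
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two codes (XOR + popcount)."""
    return bin(a ^ b).count("1")


query_code = 0b10110010  # hash code of the query image
database = {             # hash codes of database texts
    "text_1": 0b10110011,
    "text_2": 0b01001100,
    "text_3": 0b10100110,
}

# Rank database texts by Hamming distance to the query
ranked = sorted(database, key=lambda k: hamming_distance(query_code, database[k]))
print(ranked)  # ['text_1', 'text_3', 'text_2'] (distances 1, 2, 7)
```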


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/9535; G06F16/9538; G06F16/435; G06F16/438; G06F16/45; G06F16/31; G06F16/338; G06F16/35; G06F16/538; G06F16/55; G06N3/04; G06N3/08
CPC: G06F16/9535; G06F16/9538; G06F16/435; G06F16/438; G06F16/45; G06F16/313; G06F16/338; G06F16/355; G06F16/325; G06F16/538; G06F16/55; G06N3/084; G06N3/045
Inventors: 王少英, 赖韩江
Owner: SUN YAT SEN UNIV