Image-text cross-modal hash retrieval method based on large-batch training

A large-batch cross-modal hashing technology, applied in the field of cross-modal retrieval, that solves the problems of long small-batch training times, limited numbers of samples per update, and poor-quality gradients, with the effects of avoiding gradient vanishing or explosion, speeding up training, and improving retrieval precision.

Active Publication Date: 2020-05-29
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0007] In view of this, the purpose of the present invention is to provide an image-text cross-modal hash retrieval method based on large-batch training, which solves the problems of existing deep-learning-based cross-modal hash retrieval methods, especially triplet-based deep cross-modal hashing methods: small-batch training takes a long time, the number of samples obtained per update is limited, and the gradients are not good enough, all of which degrade retrieval performance.




Embodiment Construction

[0056] Embodiments of the present invention are described below through specific examples, and those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied through other, different specific embodiments, and the details in this specification can be modified or changed in various ways according to different viewpoints and applications without departing from the spirit of the present invention. It should be noted that the diagrams provided in the following embodiments only schematically illustrate the basic concept of the present invention, and the following embodiments and their features can be combined with one another provided there is no conflict.

[0057] The accompanying drawings are for illustrative purposes only; they represent schematic diagrams rather than physical drawings, and should...



Abstract

The invention relates to an image-text cross-modal hash retrieval method based on large-batch training, and belongs to the field of cross-modal retrieval. It addresses the problems of existing deep-learning-based cross-modal hash retrieval methods, particularly triplet-based deep cross-modal hashing methods, in which small-batch training takes a long time, the number of samples obtained per update is limited, and the gradients are not good enough, so that retrieval performance suffers. The method comprises the following steps: preprocessing the image and text data; carrying out hash code mapping; establishing a target loss function L; inputting the triplet data into the training model in large batches; and performing cross-modal hash retrieval with the trained model. Inputting the triplet data in large batches shortens each round of training; because more training samples are available at each parameter update, a better gradient is obtained; and because orthogonal regularization is applied to the weights, gradient norms are preserved during backpropagation, model training is more stable, and retrieval accuracy is improved.
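As a rough illustration of the training step the abstract describes, the sketch below shows large-batch triplet training of two modality-specific hashing networks with orthogonal regularization on the weights. This is a minimal PyTorch sketch under assumed details: the layer sizes, the 64-bit code length, the margin, and the coefficient `lam` are illustrative choices, not values disclosed by the patent.

```python
# Minimal sketch (PyTorch) of large-batch triplet training for cross-modal
# hashing with orthogonal weight regularization. Layer sizes, code length,
# margin, and lam are illustrative assumptions, not the patent's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

HASH_BITS = 64  # assumed hash code length


def make_hash_net(in_dim):
    """Map modality features to continuous codes in (-1, 1);
    tanh is a common relaxation of the non-differentiable sign()."""
    return nn.Sequential(
        nn.Linear(in_dim, 512), nn.ReLU(),
        nn.Linear(512, HASH_BITS), nn.Tanh())


def orthogonal_regularizer(net):
    """Penalize ||W W^T - I||_F^2 so weight matrices stay near-orthogonal,
    which keeps gradient norms stable (no vanishing or explosion)."""
    reg = 0.0
    for m in net.modules():
        if isinstance(m, nn.Linear):
            gram = m.weight @ m.weight.t()
            eye = torch.eye(gram.size(0), device=gram.device)
            reg = reg + ((gram - eye) ** 2).sum()
    return reg


def triplet_step(img_net, txt_net, img_anchor, txt_pos, txt_neg,
                 margin=0.5, lam=1e-4):
    """One large-batch update: image anchors with matching (pos) and
    non-matching (neg) texts; larger batches mean more triplets per step."""
    a = img_net(img_anchor)   # (B, HASH_BITS)
    p = txt_net(txt_pos)
    n = txt_net(txt_neg)
    loss = F.triplet_margin_loss(a, p, n, margin=margin)
    return loss + lam * (orthogonal_regularizer(img_net)
                         + orthogonal_regularizer(txt_net))
```

At retrieval time the continuous outputs would be binarized with sign(). The intuition behind the claimed gains: a batch of thousands of triplets per step, rather than the tens typical of mini-batch training, shortens each training round and averages the gradient over far more samples.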

Description

Technical Field

[0001] The invention belongs to the field of cross-modal retrieval, and relates to an image-text cross-modal hash retrieval method based on large-batch training.

Background Technique

[0002] With the rapid development of the Internet and multimedia technology, a large amount of multimedia data in different modalities has been generated, such as images, texts, and videos. Data of different modalities can describe the same thing, and presenting information from multiple perspectives helps users gain a comprehensive understanding of it. With the rapid growth of multimedia data of different modalities, cross-modal retrieval has become a research hotspot. The key to cross-modal retrieval is modeling the relationship between multimedia data of different modalities; the main difficulty is that a heterogeneity gap separates data of different modalities, so they cannot be compared directly.

[0003] The cross-modal hashing method can e...
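To make the background's point concrete: once images and texts are mapped to binary codes in a shared Hamming space, cross-modal retrieval reduces to ranking by Hamming distance, which XOR and bit-counting compute cheaply. Below is a minimal sketch using randomly generated stand-in codes; real codes would come from the trained hashing networks.

```python
# Minimal sketch of cross-modal retrieval in Hamming space; the codes here
# are random stand-ins for outputs of trained image/text hashing networks.
import numpy as np

def binarize(continuous):
    """Turn relaxed codes into 0/1 bits via the sign of each component."""
    return (continuous > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query (XOR + count)."""
    dists = np.count_nonzero(query_code ^ db_codes, axis=1)
    return np.argsort(dists)

rng = np.random.default_rng(0)
text_query = binarize(rng.standard_normal(64))           # one 64-bit text code
image_db = binarize(rng.standard_normal((10_000, 64)))   # 10k image codes
top10 = hamming_rank(text_query, image_db)[:10]
print(top10)  # indices of the 10 images nearest the text query
```

Each comparison costs a few bitwise operations regardless of the original feature dimensionality, which is the efficiency argument for hash-based cross-modal retrieval.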


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06F16/432
CPCG06F16/432Y02D10/00
Inventor: 张学旺, 周印, 林金朝, 叶财金, 黄胜
Owner: CHONGQING UNIV OF POSTS & TELECOMM