
Cross-modal hash retrieval method based on mapping dictionary learning

A dictionary learning and cross-modal technology, applied in the field of cross-modal hash retrieval based on mapping dictionary learning, which can solve problems such as high training and testing complexity, failure to learn a hash function for each modality, and uneven distribution of hash codes, all of which limit the application of such algorithms.

Active Publication Date: 2020-04-03
LUDONG UNIVERSITY

Problems solved by technology

However, such algorithms usually have the following problems, which limit their application:
1) In dictionary learning algorithms, the sparsity constraint makes the training and testing procedures computationally expensive.
2) These hashing algorithms do not learn a hash function for each modality.
3) The sample representations are sparse, resulting in an uneven distribution of -1 and 1 in the hash codes.
Moreover, these algorithms do not consider the impact of quantization loss on algorithm performance.
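The quantization-loss point above can be illustrated with a minimal NumPy sketch. This is not the patent's algorithm, only a small ITQ-style demonstration under assumed random data: binarizing real-valued codes with sign() incurs a loss, and one orthogonal-Procrustes rotation step never increases it.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((6, 4))        # hypothetical real-valued codes: 6 samples, 4 bits

B = np.sign(V)                          # naive binarization to +-1 codes
loss = np.linalg.norm(B - V) ** 2       # quantization loss ||B - V||_F^2

# One ITQ-style step: fix B and find the orthogonal rotation R minimizing
# ||B - V R||_F (an orthogonal Procrustes problem, solved via SVD).
U, _, Wt = np.linalg.svd(V.T @ B)
R = U @ Wt

VR = V @ R
new_loss = np.linalg.norm(np.sign(VR) - VR) ** 2
assert new_loss <= loss + 1e-9          # rotation never increases the loss
```

The identity matrix is always a feasible rotation, so the Procrustes step cannot make the loss worse; re-binarizing after the rotation can only decrease it further.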


Examples


Specific embodiment

[0067] Specific embodiment: a specific embodiment of the present invention is described in detail below with reference to the accompanying drawings:

[0068] Although the present invention describes two modalities, image and text, the algorithm can easily be extended to other modalities and to settings with more than two modalities. For convenience of description, the present invention considers only the two modalities of image and text.

[0069] Referring to figure 1, a cross-modal hash retrieval method based on mapping dictionary learning implements the following steps through a computer device:

[0070] Step S1: collect image and text samples through the network, establish a cross-media retrieval image and text data set, and divide the data set into a training set and a test set;
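Step S1 can be sketched as a paired train/test split. The helper below is a hypothetical illustration (names and shapes are assumptions, not from the patent): the key requirement is that image-text pairs stay aligned across the split.

```python
import numpy as np

def split_pairs(images, texts, n_train, seed=0):
    """Randomly split paired image/text samples into train and test sets.

    images, texts: arrays with one row per sample; row i of each forms a pair.
    """
    assert len(images) == len(texts)
    idx = np.random.default_rng(seed).permutation(len(images))
    tr, te = idx[:n_train], idx[n_train:]
    return (images[tr], texts[tr]), (images[te], texts[te])

# Toy usage: 10 paired samples, 8 for training.
imgs = np.arange(10, dtype=float).reshape(10, 1)
txts = imgs * 2                       # stand-in "text features" paired with imgs
(train_i, train_t), (test_i, test_t) = split_pairs(imgs, txts, n_train=8)
```

Permuting a single index array and applying it to both modalities guarantees the pairing survives the shuffle.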

[0071] The step S1 includes collecting image and text samples from social networking, shopping and other websites on the Internet, and forming image-text sample pairs from image...



Abstract

The invention discloses a cross-modal hashing retrieval method based on mapping dictionary learning. The method includes: collecting image samples and text samples through a network, establishing a cross-media retrieval data set, and dividing the data set into a training set and a test set; extracting features of the images and the texts through a BOW (bag-of-words) algorithm; learning a shared subspace for the image modality and the text modality through mapping dictionary learning, while simultaneously learning a hash function for each modality; minimizing the quantization error by learning an orthogonal rotation matrix; computing the hash codes of test samples through the hash functions of the image and text modalities and the orthogonal rotation matrix; and using test samples of one modality as query samples and the training set of the other modality as the retrieved data set, computing the Hamming distance between each query sample and the retrieved samples, sorting by distance, and returning the top-ranked samples. The retrieval method achieves high retrieval accuracy, is easily applied to large-scale data sets, is simple to implement, and has wide application prospects and great market value.
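The query stage described in the abstract can be sketched as follows. This is a hedged illustration, not the patent's learned model: the projection matrices `P_img` and `P_txt` and the rotation `R` are random stand-ins for parameters the method would learn, and linear hash functions are assumed. It shows the mechanics of projecting into a shared subspace, rotating, binarizing, and ranking by Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(1)
d_img, d_txt, bits = 8, 5, 4

# Hypothetical learned parameters: one linear hash function per modality
# and a shared orthogonal rotation matrix (random stand-ins here).
P_img = rng.standard_normal((d_img, bits))
P_txt = rng.standard_normal((d_txt, bits))
R, _ = np.linalg.qr(rng.standard_normal((bits, bits)))  # orthogonal rotation

def hash_codes(X, P, R):
    """Project features into the shared subspace, rotate, and binarize."""
    return np.sign(X @ P @ R).astype(int)

def hamming(a, B):
    """Hamming distances between one +-1 code a and each row of B."""
    return (bits - a @ B.T) // 2

# Cross-modal query: a text sample retrieves images.
images = rng.standard_normal((20, d_img))   # retrieved database (image modality)
query = rng.standard_normal(d_txt)          # one text query sample

db_codes = hash_codes(images, P_img, R)
q_code = hash_codes(query[None, :], P_txt, R)[0]
ranking = np.argsort(hamming(q_code, db_codes))  # nearest samples first
```

For +-1 codes of length `bits`, the inner product equals `bits - 2 * hamming`, which is why the distance can be recovered with one matrix product.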

Description

Technical field: [0001] The invention relates to a cross-modal hash retrieval method, in particular to a cross-modal hash retrieval method based on mapping dictionary learning.

Background art: [0002] With the rapid development of computer networks and information technology, the amount of media data on the network has increased dramatically, and media are increasingly represented in multiple modalities (image, text, sound, video, etc.). For example, when uploading a photo to Weibo, a user usually adds a passage of text describing the photo, or marks its content with tags; when describing product information, both pictures and words are usually used. Although these multimodal data have different representation forms, there are semantic associations between them. The purpose of cross-media retrieval is to mine the semantic relationships between different media, sort according to semantic relevance, and return data of diffe...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F16/953; G06N20/00
CPC: G06F16/951; G06N20/00
Inventor: 姚涛 (Yao Tao), 孔祥维 (Kong Xiangwei), 付海燕 (Fu Haiyan)
Owner: LUDONG UNIVERSITY