
Image-text mutual retrieval method based on complementary semantic alignment and symmetric retrieval

A text-and-image technology applied to still-image data retrieval, metadata-based still-image retrieval, and character and pattern recognition. It addresses the deviation between image features and text features, the asymmetry of bidirectional retrieval results, and the resulting loss of bidirectional retrieval accuracy, thereby improving mutual-retrieval accuracy and reducing errors.

Inactive Publication Date: 2019-01-22
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0003] However, the problem with the existing technology is that the image features used by existing methods contain only the object information of the image and ignore its scene context, whereas the text features include both object information and scene context. As a result, image features and text features deviate significantly when aligned in the embedding space.
In addition, because the information in a text is highly condensed semantic information while image features carry richer semantics, bidirectional retrieval results are asymmetric. For example, if a sentence appears among the top k results retrieved for a picture, that picture may not appear among the top k results when retrieving in the reverse direction from the sentence, which hurts the accuracy of two-way retrieval.
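The asymmetry can be made concrete with a small sketch using toy random embeddings in a shared space (plain NumPy; the helper `topk_rows` is illustrative, not from the patent): a sentence ranked in an image's top-k need not rank that image in its own top-k.

```python
import numpy as np

rng = np.random.default_rng(0)
n_img, n_txt, d, k = 4, 6, 8, 2
images = rng.normal(size=(n_img, d))  # toy image embeddings
texts = rng.normal(size=(n_txt, d))   # toy sentence embeddings

def topk_rows(queries, gallery, k):
    """For each query row, indices of the k most cosine-similar gallery rows."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(q @ g.T), axis=1)[:, :k]

i2t = topk_rows(images, texts, k)   # each image's top-k sentences
t2i = topk_rows(texts, images, k)   # each sentence's top-k images

# Symmetry check: is image i inside the top-k of every sentence it retrieved?
# With random embeddings some pairs come back False, which is exactly the
# bidirectional asymmetry the patent's symmetric-retrieval step targets.
for i in range(n_img):
    for t in i2t[i]:
        print(f"image {i} -> text {t}: mutual={i in t2i[t]}")
```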




Embodiment Construction

[0035] In order to make the object, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the examples. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.

[0036] The invention aims to solve two problems: image features contain incomplete information and lose the scene context of the image; and the proximity relationship between features of the two modalities is asymmetric during cross-modal retrieval. To address the latter, the initial retrieval list is reordered to obtain the final ranked retrieval list.

[0037] The application principle of the present invention will be described in detail below in conjunction with the accompanying drawings.

[0038] As shown in Figure 1, the image-text mutual retrieval method based on complementary semantic alignment and symmetric retrieval provided by th...



Abstract

The invention belongs to the technical fields of computer vision and natural language processing, and discloses an image-text mutual retrieval method based on complementary semantic alignment and symmetric retrieval. The method comprises: extracting deep visual features of images using both an object-based convolutional neural network and a scene-based convolutional neural network, so that the visual features contain the complementary semantic information of objects and scenes; encoding the text with a long short-term memory network and extracting the corresponding semantic features; mapping the visual features and the text features into the same cross-modal embedding space via two mapping matrices; retrieving an initial list in the cross-modal embedding space by the k-nearest-neighbor method; and reordering the initial retrieval list using the neighborhood relation of symmetric bidirectional retrieval based on the mutual-nearest-neighbor method, to obtain the final ranked retrieval list. The invention achieves high mutual-retrieval accuracy.
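A minimal sketch of the pipeline described in the abstract, with random matrices standing in for the trained object CNN, scene CNN, LSTM encoder, and the two learned mapping matrices (all names and dimensions are illustrative assumptions, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
d_obj, d_scene, d_txt, d_emb = 16, 16, 32, 8
n_img, n_txt = 5, 5

# Stand-ins for learned components: object-CNN features, scene-CNN features,
# LSTM sentence features, and the two learned mapping matrices.
obj_feat = rng.normal(size=(n_img, d_obj))
scene_feat = rng.normal(size=(n_img, d_scene))
txt_feat = rng.normal(size=(n_txt, d_txt))
W_v = rng.normal(size=(d_obj + d_scene, d_emb))  # visual mapping matrix
W_t = rng.normal(size=(d_txt, d_emb))            # text mapping matrix

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# 1. Complementary visual feature: concatenate object and scene descriptors.
visual = np.concatenate([obj_feat, scene_feat], axis=1)

# 2. Map both modalities into the shared cross-modal embedding space.
v_emb = l2norm(visual @ W_v)
t_emb = l2norm(txt_feat @ W_t)

# 3. Initial list: k-nearest-neighbor retrieval by cosine similarity.
sim = v_emb @ t_emb.T
k = 3
t2i = np.argsort(-sim.T, axis=1)[:, :k]  # each sentence's top-k images

# 4. Re-rank: promote mutual nearest neighbors (symmetric retrieval).
def rerank(img_idx):
    """Move sentences that rank img_idx in their own top-k to the front."""
    ranked = list(np.argsort(-sim[img_idx]))
    mutual = [t for t in ranked if img_idx in t2i[t]]
    rest = [t for t in ranked if t not in mutual]
    return mutual + rest

final = rerank(0)  # final ranked sentence list for image 0
```

This keeps the two core ideas visible: step 1 aligns complementary object/scene semantics before embedding, and step 4 rewards text-image pairs whose nearest-neighbor relation holds in both directions.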

Description

Technical Field

[0001] The invention belongs to the technical fields of computer vision and natural language processing, and in particular relates to an image-text mutual retrieval method based on complementary semantic alignment and symmetric retrieval.

Background

[0002] The image-semantic-description mutual retrieval task aims to retrieve a related textual description sentence from a text library given a query image, or to retrieve the corresponding image from an image library given a textual description. The task has important practical significance, such as helping the blind to "see" the world; it is also regarded as a major challenge in image understanding and a core problem in computer vision. Therefore, image-semantic-description mutual retrieval is one of the most actively researched topics in computer vision and natural language processing. At pre...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/58, G06K9/62
CPC: G06F18/24147
Inventors: 田春娜, 姜萌萌, 高新波, 刘恒, 张相南, 王秀美
Owner: XIDIAN UNIV