Image-text cross-modal retrieval method based on joint features

A joint-feature, cross-modal retrieval technology, applied in the field of image processing, that ensures the overall semantics are not lost

Pending Publication Date: 2022-07-08
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to address the deficiencies of the above-mentioned prior art by proposing a cross-modal image-text retrieval method based on joint features, aiming to balance the global and local features of images and texts and to reduce the influence of redundant information in the global features.



Examples


Embodiment Construction

[0052] The present invention will be further described below with reference to the accompanying drawings and embodiments.

[0053] Referring to figure 1, the implementation steps of the present invention are described in further detail with the examples.

[0054] Step 1, generate training set and test set.

[0055] Step 1.1: select at least 10,000 natural images and their corresponding texts depicting the image content; each image has at least 5 sentences of text describing it, and the text can be in English or Chinese.

[0056] Step 1.2: traverse all the texts in the sample set, find the nouns in each text, sort the nouns by their number of occurrences in the sample set from high to low, and select the 500 most frequent nouns to form a noun set. For each text in the sample set, define a semantic label: a label value of 1 means that the text contains a noun from the noun set, and a label value of 0 means that the text does not...
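Step 1.2 can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes noun extraction (e.g. with a POS tagger) has already been done upstream, and uses a small vocabulary size for clarity in place of the 500 nouns the patent specifies.

```python
from collections import Counter

def build_noun_vocab(texts_nouns, vocab_size=500):
    """Build the noun set: the vocab_size most frequent nouns across
    all texts in the sample set, sorted from high to low occurrence.

    texts_nouns is a list of noun lists, one per text; noun extraction
    itself is assumed to be done upstream by a POS tagger."""
    counts = Counter(noun for nouns in texts_nouns for noun in nouns)
    return [noun for noun, _ in counts.most_common(vocab_size)]

def semantic_label(nouns, vocab):
    """Multi-hot semantic label for one text: entry i is 1 if the text
    contains the i-th noun of the vocabulary, and 0 otherwise."""
    present = set(nouns)
    return [1 if noun in present else 0 for noun in vocab]
```

For example, with three texts whose nouns are `["dog", "ball", "dog"]`, `["dog", "park"]`, and `["cat", "park"]`, a vocabulary of size 2 is `["dog", "park"]`, and the third text receives the label `[0, 1]`.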



Abstract

The invention discloses an image-text mutual retrieval method based on joint features, which can be used for mutual retrieval of natural images and texts. The method comprises the following implementation steps: 1, generating a training set; 2, constructing a deep learning network; 3, training the deep learning network; and 4, performing mutual retrieval on the images and texts to be retrieved. The method adopts an image joint-feature processing sub-network based on an attention mechanism to reconstruct the global features of the image, which can eliminate the influence of redundant information in the global features and construct image and text features more accurately. By combining the global and local features of images and texts, the method mines the semantic relation between image and text more deeply, allows the network to fully exploit fine-grained local features, and ensures that the overall semantics are not lost.
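The attention-based reconstruction of global features described above can be illustrated with a deliberately simplified sketch. This is a hypothetical toy version, not the patent's sub-network: it scores each local feature against the global feature, softmaxes the scores, and replaces the global feature with the attention-weighted sum of local features, so that redundant global information is suppressed in favor of the regions that matter.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reconstruct_global(global_feat, local_feats):
    """Toy attention-based reconstruction of a global feature:
    attention weights come from the similarity of each local feature
    to the global feature; the output is the weighted sum of locals."""
    weights = softmax([dot(global_feat, loc) for loc in local_feats])
    dim = len(global_feat)
    return [sum(w * loc[i] for w, loc in zip(weights, local_feats))
            for i in range(dim)]
```

With a global feature `[1.0, 0.0]` and local features `[1.0, 0.0]` and `[0.0, 1.0]`, the first local feature (more similar to the global one) dominates the reconstruction, while the second still contributes.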

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and further relates to a cross-modal retrieval method for images and texts based on joint features, at the intersection of natural language processing and computer vision. The invention can be used to mine the deep relationship between the two different modalities of image and text, extract image features and text features, calculate the similarity of image-text pairs using the extracted features, and realize cross-modal retrieval of images and texts.

Background Technique

[0002] There are currently two main methods for cross-modal retrieval of images and text: one builds a network based on the global features of images and texts, the other based on their local features. Building a network based on global features operates on the entire image and the entire text; the main approach is to build a deep learning model to extr...
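Paragraph [0001] describes computing image-text pair similarity from the extracted features and ranking by it. A minimal sketch of that retrieval step, assuming features are plain vectors and using cosine similarity as the scoring function (the patent text does not specify which similarity measure is used):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def retrieve(query_feat, candidate_feats):
    """Cross-modal retrieval: rank candidates of the other modality by
    similarity to the query feature; returns indices, best match first."""
    sims = [cosine(query_feat, c) for c in candidate_feats]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)
```

For an image feature `[1, 0]` and three text features `[0, 1]`, `[1, 1]`, `[1, 0]`, the ranking is the third, then the second, then the first text.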

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/53; G06N3/04; G06N3/08; G06F40/30; G06V20/20; G06V10/764; G06V30/19; G06V10/74; G06V10/774; G06V10/82
CPC: G06F16/53; G06N3/08; G06F40/30; G06N3/045; G06F18/22; G06F18/24; G06F18/214; Y02D10/00
Inventor: 高迪辉, 盛立杰, 苗启广
Owner: XIDIAN UNIV