Fine-grained image weak supervision target positioning method based on deep learning

A target positioning technology based on deep learning, applied in the field of image-text target positioning. It addresses the problem that existing methods ignore the fine-grained relationship between images and their language descriptions, and achieves weakly supervised target positioning.

Pending Publication Date: 2020-08-28
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, these methods only perform matching in a single vector space, while ignoring the fine-grained relationship between images and their language descriptions.



Embodiment Construction

[0022] The technical solution of the present invention is further described below in conjunction with the accompanying drawings. Figure 1 is the overall flowchart of the method.

[0023] Step 1: Divide the dataset

[0024] The data used in the implementation of the method comes from the public benchmark dataset CUB-200-2011, which contains 11,788 color images of birds across 200 categories, with about 60 images per category. The dataset is multi-label: each image has ten corresponding sentences of language description. The image dataset is divided into two parts: one part is used as the test sample set for evaluating performance, and the other as the training sample set for training the network model.
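As an illustration of this split, the following is a minimal sketch that partitions CUB-200-2011 into training and test sample sets. It relies on the dataset's standard metadata files images.txt and train_test_split.txt; the code is an illustrative assumption, not the patented procedure.

```python
# Minimal sketch: split CUB-200-2011 into training and test sample sets.
# Assumes the dataset's standard metadata files:
#   images.txt           -> "<image_id> <relative_image_path>"
#   train_test_split.txt -> "<image_id> <is_training_image>"  (1 = training)
from pathlib import Path

def split_cub(root: str):
    root = Path(root)
    id_to_path = dict(line.split() for line in (root / "images.txt").read_text().splitlines())
    train_set, test_set = [], []
    for line in (root / "train_test_split.txt").read_text().splitlines():
        img_id, is_train = line.split()
        (train_set if is_train == "1" else test_set).append(id_to_path[img_id])
    return train_set, test_set

train_images, test_images = split_cub("CUB_200_2011")
print(len(train_images), "training images;", len(test_images), "test images")
```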

[0025] Step 2: Build an image and language two-way network model

[0026] The structure of the image-language localization network model is a two-way structure: one way inputs the image into a convolutional neural network to extract a convolutional feature map, and the other way encodes the language description to extract its feature vector...
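To make the two-way structure concrete, below is a minimal PyTorch sketch of this kind of image-language matching model: one branch extracts a convolutional feature map from the image, the other encodes the language description into a single feature vector, and cosine similarity between that vector and every spatial position of the feature map yields a matching map. The ResNet-50 backbone, GRU encoder, and embedding size are assumptions for illustration, not the network disclosed in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class TwoWayLocalizer(nn.Module):
    """Sketch of a two-branch image-language matching model (assumed layout)."""
    def __init__(self, vocab_size: int, embed_dim: int = 256):
        super().__init__()
        # Image branch: keep the CNN fully convolutional (drop avgpool/fc).
        backbone = resnet50(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # -> (B, 2048, H, W)
        self.img_proj = nn.Conv2d(2048, embed_dim, kernel_size=1)
        # Language branch: word embedding + GRU producing one sentence vector.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, images, token_ids):
        feat_map = self.img_proj(self.cnn(images))        # (B, D, H, W)
        _, h_n = self.gru(self.embed(token_ids))          # h_n: (1, B, D)
        text_vec = h_n.squeeze(0)                         # (B, D)
        # Cosine similarity between the text vector and each spatial position.
        feat_map = F.normalize(feat_map, dim=1)
        text_vec = F.normalize(text_vec, dim=1)
        match_map = torch.einsum("bdhw,bd->bhw", feat_map, text_vec)  # (B, H, W)
        return match_map

# Usage: the matching map can be upsampled and thresholded into a saliency map.
model = TwoWayLocalizer(vocab_size=10000)
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, 10000, (2, 16))
print(model(images, tokens).shape)  # torch.Size([2, 7, 7])
```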



Abstract

The invention relates to a fine-grained image weakly supervised target positioning method based on deep learning, which addresses the problem of recognizing and positioning targets in fine-grained images using only weakly supervised language descriptions that are easy to collect. The method performs inter-modal fine-grained semantic alignment directly between the pixel level of the image and the words of the language description: the image is input into a convolutional neural network to extract feature vectors, and the language description is encoded to extract its feature vector; feature matching is then performed between the convolutional feature map and the language description feature vector, and the feature matching map is processed to obtain a saliency map of the target and the final positioning result. The method realizes weakly supervised target positioning of fine-grained images without requiring strongly supervised annotated bounding boxes.
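The final stage described above, processing the feature matching map into a saliency map and a positioning result, can be sketched as simple post-processing: upsample the matching map to image resolution, normalize and threshold it, and take the bounding box of the largest connected foreground region. The threshold value and the use of OpenCV connected components are assumptions for illustration, not the specific procedure claimed in the patent.

```python
import cv2
import numpy as np

def saliency_to_bbox(match_map: np.ndarray, image_size: tuple, thresh: float = 0.5):
    """Turn a low-resolution matching map into one bounding box (illustrative sketch)."""
    h, w = image_size
    # Upsample to image resolution and normalize to [0, 1] as a saliency map.
    sal = cv2.resize(match_map.astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    # Threshold and keep the largest connected foreground region.
    mask = (sal >= thresh).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:                       # no foreground found
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y = stats[largest, cv2.CC_STAT_LEFT], stats[largest, cv2.CC_STAT_TOP]
    bw, bh = stats[largest, cv2.CC_STAT_WIDTH], stats[largest, cv2.CC_STAT_HEIGHT]
    return (x, y, x + bw, y + bh)      # (x_min, y_min, x_max, y_max)
```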

Description

technical field

[0001] The invention relates to the technical field of image-text target positioning in deep learning; the method is intended to quickly and accurately position targets on fine-grained image datasets.

Background technique

[0002] Exploring the correlation between images and their natural language descriptions has long been an important research area in computer vision, closely related to bidirectional image-text retrieval, image annotation, visual question answering (VQA), image embedding, and zero-shot learning. Humans use language concepts to describe the images they see, especially to distinguish fine-grained images, so there is a strong correlation between images and their language descriptions. Object detection is also widely used in the image field, but many current localization methods rely heavily on expensive and hard-to-obtain strongly supervised labels. However, images and their language descriptions exist widely in the real world and are very easy to collect...

Claims


Application Information

IPC(8): G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06N3/045, G06F18/2411, G06F18/22, G06F18/29, G06F18/253
Inventor: 段立娟, 梁明亮, 恩擎, 乔元华
Owner: BEIJING UNIV OF TECH