Deep learning image target mapping and localization method based on weak supervision information

A deep learning and localization technology applied in the field of image processing, which addresses the problem that pooled feature points have insufficient ability to represent the original features.

Active Publication Date: 2022-03-18
PEKING UNIV

AI Technical Summary

Problems solved by technology

[0006] A disadvantage of the above-mentioned prior-art image mapping method is that pooling the feature map by computing a global average or a global maximum leaves the pooled feature points with insufficient ability to represent the original features.
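
As a rough illustration of the alternative the invention pursues, the sketch below (hypothetical PyTorch code; the class name, the softmax weighting, and the tensor shapes are assumptions rather than the patent's exact formulation) contrasts a global pooling layer with learnable spatial weights against plain global average pooling:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableGlobalPool(nn.Module):
    """Global pooling with a learnable spatial weight map (hypothetical sketch).

    Instead of a fixed global average or global maximum over the H x W
    feature map, a learnable weight per spatial position decides how much
    each location contributes to the pooled feature point.
    """

    def __init__(self, height, width):
        super().__init__()
        # one learnable weight per spatial position, shared across channels
        self.weight = nn.Parameter(torch.zeros(1, 1, height, width))

    def forward(self, feature_map):                        # (N, C, H, W)
        w = torch.softmax(self.weight.flatten(2), dim=-1)  # normalize over H*W
        w = w.view_as(self.weight)
        return (feature_map * w).sum(dim=(2, 3))           # (N, C)


if __name__ == "__main__":
    fmap = torch.randn(2, 512, 7, 7)
    pooled = LearnableGlobalPool(7, 7)(fmap)          # learnable pooling
    gap = F.adaptive_avg_pool2d(fmap, 1).flatten(1)   # plain GAP baseline
    print(pooled.shape, gap.shape)                    # both (2, 512)
```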




Embodiment Construction

[0036] Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the drawings, wherein the same or similar reference numerals throughout designate the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the figures are exemplary; they are intended only to explain the present invention and should not be construed as limiting it.

[0037] Those skilled in the art will understand that, unless otherwise stated, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the description of the present invention refers to the presence of said features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be unders...



Abstract

The invention provides a deep learning image target mapping and localization method based on weak supervision information. The method includes: training two deep convolutional neural network frameworks with category-labeled image data to obtain a classification model M1 and a classification model M2, and learning the parameters of a global parameterized learnable pooling layer; performing feature extraction on a test image with the new classification model M2 to obtain a feature map, and deriving a preliminary localization box from the feature map through feature category mapping and thresholding; extracting candidate regions from the test image with the selective search method and screening a candidate box set with the classification model M1; and applying non-maximum suppression to the preliminary localization box and the candidate boxes to obtain the final target localization box for the test image. The invention introduces a global learnable pooling layer with parameters, which can learn a better feature expression for the target category, and effectively obtains the position information of the target object in the image through selective feature category mapping.
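
To make the pipeline in the abstract concrete, here is a hedged sketch (PyTorch/NumPy; the helper names, the 0.2 CAM threshold, and the assumption that selective-search proposals and their M1 scores are supplied externally are all illustrative, not taken from the patent) of feature category mapping, thresholding to a preliminary box, and fusing with candidate boxes via non-maximum suppression:

```python
import numpy as np
import torch
from torchvision.ops import nms  # standard NMS from torchvision


def class_activation_map(feature_map, class_weights):
    """Weight a (C, H, W) feature map by the classifier weights of one
    category, sum over channels, and normalize the result to [0, 1]."""
    cam = (class_weights[:, None, None] * feature_map).sum(dim=0)
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)


def cam_to_box(cam, thresh=0.2):
    """Threshold the normalized map and take the bounding box of the
    responding region (feature-map coordinates; rescale to image size)."""
    ys, xs = np.where(cam.numpy() > thresh)
    return torch.tensor([xs.min(), ys.min(), xs.max(), ys.max()],
                        dtype=torch.float32)


def fuse_boxes(prelim_box, candidate_boxes, candidate_scores, iou_thresh=0.5):
    """Run NMS over the preliminary CAM box and the classifier-screened
    selective-search candidates; the survivors are the final boxes."""
    boxes = torch.cat([prelim_box[None], candidate_boxes], dim=0)
    scores = torch.cat([torch.tensor([1.0]), candidate_scores])
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep]
```

In the method described above, `class_weights` would come from the classification model M2 and `candidate_scores` from screening with M1; both are treated as externally given inputs in this sketch.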

Description

Technical Field

[0001] The invention relates to the technical field of image processing, and in particular to a deep learning image target mapping and localization method based on weakly supervised information.

Background

[0002] With the development of deep learning technology, represented by deep convolutional neural networks, great breakthroughs have been made in image classification and image object recognition, triggering many influential academic studies and related industrial applications. In the 2015 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), the deep residual model proposed by Microsoft Research Asia won the championship with a recognition error rate of 3.57%, surpassing human recognition accuracy for the first time.

[0003] The Region-based Convolutional Neural Network (R-CNN), proposed in 2014, was the first to use a deep convolutional network for image target detection tasks, and its performance was significantly im...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62, G06N3/04, G06N3/08, G06V10/764, G06V10/82
CPC: G06N3/084, G06V2201/07, G06N3/048, G06N3/045, G06F18/214
Inventors: 田永鸿, 李宗贤, 史业民, 曾炜, 王耀威
Owner: PEKING UNIV