Salient object extraction method based on label semantic meaning

A salient object extraction technology, applied in the fields of instruments, character and pattern recognition, computer components, etc. It addresses the problems that existing methods do not consider label context, depend on region labeling, and do not generalize easily.

Active Publication Date: 2018-04-27
BEIJING UNION UNIVERSITY
Cites: 4 · Cited by: 5

AI Technical Summary

Problems solved by technology

[0006] The common disadvantage of these two documents is that the effect of saliency labeling depends on the region labeling, and the method of rel...

Method used


Image

  • Salient object extraction method based on label semantic meaning

Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0068] As shown in Figure 1, the training process is as follows:

[0069] Execute step 100: input the training set, and perform the following operations on each image in it.

[0070] Execute step 110: perform superpixel segmentation on image I;

[0071] Image I is segmented into M superpixels, and each superpixel is denoted as R_i, 1 ≤ i ≤ M.
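The segmentation of step 110 can be sketched as follows. This is only an illustrative stand-in, a fixed grid partition in NumPy rather than an actual superpixel algorithm such as SLIC; the function name and the grid parameter are hypothetical.

```python
import numpy as np

def grid_superpixels(image, grid=4):
    """Partition an H x W image into a grid of roughly equal blocks.

    Illustrative stand-in for a real superpixel algorithm: returns a
    label map where pixel (y, x) belongs to superpixel R_i,
    0 <= i < M, with M = grid * grid.
    """
    h, w = image.shape[:2]
    ys = np.minimum(np.arange(h) * grid // h, grid - 1)
    xs = np.minimum(np.arange(w) * grid // w, grid - 1)
    return ys[:, None] * grid + xs[None, :]

# Example: an 8 x 8 image split into M = 4 superpixels.
labels = grid_superpixels(np.zeros((8, 8, 3)), grid=2)
```

A real implementation would use an algorithm such as SLIC, which groups pixels by color and spatial proximity instead of a fixed grid.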

[0072] Execute step 120: extract the appearance-based visual features of the image;

[0073] The appearance visual feature of the i-th superpixel is v_i; its component on the k-th feature channel is denoted v_i^k.
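A minimal sketch of step 120, assuming the mean color per channel as the appearance feature v_i; the patent does not fix the feature channels, so any per-region descriptor could be substituted.

```python
import numpy as np

def appearance_features(image, labels):
    """Mean color per superpixel as a simple appearance feature.

    v[i, k] is the feature of superpixel R_i on channel k (here the
    mean of color channel k over the pixels of R_i).
    """
    m = labels.max() + 1
    v = np.zeros((m, image.shape[2]))
    for i in range(m):
        v[i] = image[labels == i].mean(axis=0)
    return v
```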

[0074] Execute step 130: perform saliency calculation based on the image appearance features;

[0075] The saliency of the i-th superpixel on the k-th feature channel is calculated as follows:

[0076] s_i^k = Σ_{j=1, j≠i}^{M} w_ij · D(v_i^k, v_j^k)

[0077] Among them, D(v_i^k, v_j^k) represents the difference between superpixel R_i and superpixel R_j on the k-th feature channel, and w...
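The saliency computation of step 130 can be sketched as below, taking the spatial weights w_ij as given and assuming D is the absolute difference on each channel (the patent does not specify the exact form of D).

```python
import numpy as np

def channel_saliency(v, w):
    """Per-channel contrast saliency, a sketch of step 130.

    v: (M, K) appearance features, v[i, k] = v_i^k.
    w: (M, M) spatial distance weights w_ij.
    Returns s with s[i, k] = sum_j w_ij * |v_i^k - v_j^k|
    (the j = i term contributes zero since D(v_i^k, v_i^k) = 0).
    """
    diff = np.abs(v[:, None, :] - v[None, :, :])   # (M, M, K): D(v_i^k, v_j^k)
    return np.einsum('ij,ijk->ik', w, diff)        # weighted sum over j
```

A superpixel whose feature differs strongly from the others, weighted toward spatially near neighbors, receives high saliency on that channel.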

Embodiment 2

[0098] As shown in Figure 2, the testing process is as follows:

[0099] Execute step 200: input an image I;

[0100] Execute step 210: perform superpixel segmentation on image I;

[0101] Image I is segmented into M superpixels, and each superpixel is denoted as R_i, 1 ≤ i ≤ M.

[0102] Execute step 220: extract the appearance-based visual features of the image;

[0103] The appearance visual feature of the i-th superpixel is v_i; its component on the k-th feature channel is denoted v_i^k.

[0104] Execute step 230: perform saliency calculation based on the image appearance features;

[0105] The saliency of the i-th superpixel on the k-th feature channel is calculated as follows:

[0106] s_i^k = Σ_{j=1, j≠i}^{M} w_ij · D(v_i^k, v_j^k)

[0107] Among them, D(v_i^k, v_j^k) represents the difference between superpixel R_i and superpixel R_j on the k-th feature channel, and w_ij represents the spatial distance weight, calculated as

[0108] w_ij = exp(-||p_i - p_j||^2 / σ^2)

[0109] p_i s...
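A sketch of the spatial distance weight w_ij, assuming a common Gaussian form over superpixel centroid positions p_i; since the exact formula of [0108] is not fully reproduced in this text, the Gaussian form and the bandwidth parameter sigma are assumptions.

```python
import numpy as np

def spatial_weights(p, sigma=0.5):
    """Gaussian spatial distance weights (assumed form of w_ij).

    p: (M, 2) superpixel centroid positions p_i. Nearby superpixels
    receive weights close to 1; distant ones decay toward 0.
    """
    d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-d2 / (2 * sigma ** 2))
```

The weight matrix is symmetric with ones on the diagonal, so each superpixel's saliency is dominated by its contrast against nearby regions.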

Embodiment 3

[0116] As shown in Figure 3, the process of obtaining the final saliency map can be clearly seen.

[0117] In the first step, the training process is performed. The image 300 of people and flowers in the picture collection is subjected to superpixel segmentation to obtain image 310. Image 310 undergoes appearance feature extraction to obtain image 311, and then image 311 undergoes appearance saliency feature calculation to obtain image 312. Image 310 also undergoes tag feature extraction to obtain image 313, and then image 313 undergoes salient feature calculation based on tag semantics to obtain image 314. Image 312 and image 314 are jointly used for training to obtain a weight vector 320.

[0118] In the second step, the testing process is carried out. Superpixel segmentation is performed on the person image 330 to obtain image 340. Image 340 undergoes appearance feature extraction to obtain image 341, and then...
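The fusion described in Embodiment 3, where appearance saliency (image 312) and tag-semantic saliency (image 314) are combined through a learned weight vector (320), might be sketched as a linear combination. The linear form and the min-max normalization here are assumptions for illustration, not the patent's exact training procedure.

```python
import numpy as np

def fuse_saliency(s_app, s_tag, weights):
    """Combine appearance and tag-semantic saliency with a weight vector.

    s_app:   (M, K) per-channel appearance saliency (image 312).
    s_tag:   (M,)   tag-semantic saliency (image 314).
    weights: (K + 1,) learned weight vector (320): one weight per
             appearance channel plus one for the tag term.
    Returns a per-superpixel saliency score normalized to [0, 1].
    """
    features = np.hstack([s_app, s_tag[:, None]])  # (M, K + 1)
    s = features @ weights                         # linear combination
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```

Assigning each superpixel its fused score and painting it back over the label map would yield the final saliency map.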


Abstract

The invention provides a salient object extraction method based on label semantics, which comprises the steps of training, testing, and obtaining a final saliency map. The training step comprises the sub-steps of inputting a training set and performing superpixel segmentation on an image I. According to the invention, an object label is first selected from the labels, detection is performed through an object detector corresponding to the object label, and salient features based on the semantics of the label are obtained; the label semantics and the appearance-based salient features are then integrated to detect the salient object. Because label semantic information is high-level semantic information, the traditional salient object detection method can be improved.

Description

Technical field

[0001] The invention relates to the technical field of digital image processing, and in particular to a method for extracting salient objects based on label semantics.

Background technique

[0002] Although label semantics have been widely used in the field of image annotation, label information is usually processed separately from salient object extraction tasks, and few works apply it to salient object extraction.

[0003] Literature [Wen Wang, Congyan Lang, Songhe Feng. Contextualizing Tag Ranking and Saliency Detection for Social Images. Advances in Multimedia Modeling, Lecture Notes in Computer Science, Volume 7733, 2013, pp. 428-435.] and literature [Zhu, G., Wang, Q., Yuan, Y. Tag-saliency: Combining bottom-up and top-down information for saliency detection. Computer Vision and Image Understanding, 2014, 118(1): 40-49.] both make use of the semantic information of tags.

[0004] Literature [Wen Wang, Congyan Lang, Songhe Feng. Contextualiz...

Claims


Application Information

Patent Timeline
IPC(8): G06K9/62, G06K9/46, G06K9/72
CPC: G06V10/467, G06V10/56, G06V10/462, G06V30/274, G06F18/253, G06F18/214
Inventor 梁晔
Owner BEIJING UNION UNIVERSITY