Image object co-segmentation method guided by local shape migration

A technology for image object co-segmentation, applied in the fields of image processing and computer vision, which addresses the problem of poor segmentation results when the appearance of foreground objects varies, and achieves high execution time and space efficiency.

Active Publication Date: 2017-05-10
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

Co-segmentation methods based on region matching have achieved satisfactory results on public datasets, but when the appearance of the foreground objects changes greatly, the final segmentation results are strongly affected.

Embodiment Construction

[0036] As shown in Figure 1, the present invention proposes an image object co-segmentation method guided by local shape transfer, comprising the following steps:

[0037] (1) Image set preprocessing: input M images containing objects of the same semantic category; use the saliency detection method proposed by Zhang et al. in 2015 to analyze the saliency of each image; and apply a double-mean threshold to the saliency detection result to obtain a mask map, which is the initial foreground-background segmentation result. The mask map consists only of 0s and 1s, where 1 denotes a foreground pixel and 0 denotes a background pixel.
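
The preprocessing step can be sketched as follows. This is a minimal sketch, assuming that the "double mean" threshold means twice the mean value of the saliency map; the saliency detector of Zhang et al. (2015) is not reproduced here, and the function name `initial_mask` and the `factor` parameter are illustrative, not part of the patent.

```python
import numpy as np

def initial_mask(saliency_map, factor=2.0):
    """Binarize a saliency map into the initial foreground-background mask.

    Assumes the "double mean" threshold is `factor` times the mean saliency;
    the saliency map itself is expected to come from an external detector.
    """
    s = np.asarray(saliency_map, dtype=np.float64)
    threshold = factor * s.mean()
    return (s >= threshold).astype(np.uint8)  # 1 = foreground pixel, 0 = background pixel
```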

[0038] (2) Perform dense feature point matching on every pair of images: for each image i (i=1,2,...,M), generate a 128-dimensional dense SIFT feature for every pixel of the image; for the dense SIFT features of image i (i=1,2,...,M) and of every other image j (j=1,2,...,M, j≠i), adopt Kim et ...
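
The per-pixel descriptor computation can be illustrated with the following sketch, which places a SIFT keypoint on every sampled pixel and describes it with OpenCV; this is an assumed stand-in for the dense SIFT features named above, not the patent's exact descriptor or the Kim et al. matching procedure, and `dense_sift`, `step`, and `patch_size` are illustrative names.

```python
import cv2

def dense_sift(gray, step=1, patch_size=4):
    """Compute a 128-dimensional SIFT descriptor at every sampled pixel.

    Illustrative only: keypoints are laid on a regular grid (step=1 gives one
    descriptor per pixel); the pairwise matching step is not reproduced here.
    """
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), float(patch_size))
                 for y in range(0, gray.shape[0], step)
                 for x in range(0, gray.shape[1], step)]
    _, descriptors = sift.compute(gray, keypoints)
    return descriptors  # shape: (number of sampled pixels, 128)
```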

Abstract

The invention relates to an image object co-segmentation method guided by local shape migration, comprising the following steps: inputting M images containing objects of the same semantic category, performing a saliency analysis on each image, and generating an initial foreground-background segmentation result; carrying out dense feature point matching for any two images; establishing the correspondence between each local image area and the local areas of the other images according to the matching result; learning the weights of the correspondences using a local linear structure maintenance algorithm; and using an iterative algorithm to transfer the foreground-background segmentation results between the corresponding local areas to obtain the final segmentation result. The method performs well in the co-segmentation of image objects of the same semantic category, and can be applied to fields such as image content understanding and image object recognition.
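
The "local linear structure maintenance" weight learning can be illustrated with a locally-linear-embedding-style reconstruction: each local region descriptor is reconstructed from its nearest corresponding regions with weights that sum to one. This is a sketch of that general idea under the stated assumption, not the patent's exact formulation; `locally_linear_weights`, `k`, and `reg` are illustrative names.

```python
import numpy as np

def locally_linear_weights(features, k=5, reg=1e-3):
    """LLE-style reconstruction weights over local region descriptors (sketch)."""
    features = np.asarray(features, dtype=np.float64)
    n = features.shape[0]
    k = min(k, n - 1)
    weights = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(features - features[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]          # k nearest regions, excluding i
        diff = features[neighbours] - features[i]        # local differences, shape (k, d)
        gram = diff @ diff.T
        gram += reg * np.eye(k)                          # regularize for numerical stability
        w = np.linalg.solve(gram, np.ones(k))            # minimise the reconstruction error
        weights[i, neighbours] = w / w.sum()             # enforce the sum-to-one constraint
    return weights
```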

Description

Technical field

[0001] The invention belongs to the technical fields of image processing and computer vision, and relates to an image object co-segmentation method guided by local shape transfer.

Background technique

[0002] Given an image set containing objects of the same semantic category, image object co-segmentation technology mainly considers how to segment the common objects from it, so as to carry out higher-level visual understanding tasks such as image content understanding and object detection. In 2006, Rother et al. first proposed the concept of image object co-segmentation, using a generative model to perform foreground-background segmentation of objects on image pairs containing the same category. This method uses a Gaussian model to generate a potential foreground histogram, adds the difference between the foreground histograms of the image pair as a global constraint to the energy based on the Markov random field, and finally the TRGC optimization algorithm is us...
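
Schematically, and only as an illustration of the kind of energy described above rather than the exact formulation of Rother et al. (2006), such a model combines a per-image Markov random field term with a global penalty on the difference between the two foreground histograms:

```latex
\[
E(x^1, x^2) \;=\; \sum_{k=1}^{2} \Big( \sum_{p} E_p\!\left(x^k_p\right)
  + \sum_{(p,q) \in \mathcal{N}} E_{pq}\!\left(x^k_p, x^k_q\right) \Big)
  \;+\; \lambda \, d\!\left( h(x^1),\, h(x^2) \right)
\]
```

Here x^k_p ∈ {0, 1} is the foreground label of pixel p in image k, h(x^k) is the foreground histogram induced by that labeling, d(·,·) measures the histogram difference, and λ weights the global constraint that the optimization handles jointly with the per-image MRF terms.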

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/46; G06T7/11; G06T7/136
CPC: G06T2207/20081; G06T2207/20016; G06T2207/10004; G06V10/462
Inventor: 陈小武, 滕炜, 张宇, 李甲, 赵沁平
Owner: BEIHANG UNIV