
Image object co-segmentation method guided by local shape transfer

An image object co-segmentation technology applied in the fields of computer vision and image processing, which addresses the problem of poor segmentation results caused by large changes in foreground appearance and achieves high execution time and space efficiency.

Active Publication Date: 2019-08-09
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

Co-segmentation methods based on region matching have achieved satisfactory results on public datasets, but when the appearance of the foreground objects changes greatly, the final segmentation results are severely degraded.

Method used


Examples


Embodiment Construction

[0036] As shown in Figure 1, the present invention proposes an image object co-segmentation method guided by local shape transfer, comprising the following steps:

[0037] (1) Image set preprocessing: input M images containing objects of the same semantic category, apply the saliency detection method proposed by Zhang et al. in 2015 to each image, and threshold the saliency detection result at double its mean value to obtain a mask map, which is the initial foreground/background segmentation result. The mask map consists only of 0s and 1s, where 1 denotes a foreground pixel and 0 denotes a background pixel.
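
A minimal sketch of this thresholding step is given below, assuming the saliency map has already been produced by an external detector (the Zhang et al. 2015 method is not reproduced here); the function and variable names are illustrative, not from the patent.

```python
# Minimal sketch of step (1), assuming `saliency` is a float saliency map
# already computed by an external detector. Names are illustrative only.
import numpy as np

def initial_mask(saliency: np.ndarray) -> np.ndarray:
    """Threshold a saliency map at double its mean to get a 0/1 mask
    (1 = foreground pixel, 0 = background pixel)."""
    threshold = 2.0 * saliency.mean()            # the "double mean" threshold
    return (saliency >= threshold).astype(np.uint8)
```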

[0038] (2) Perform dense feature point matching between every pair of images: for each image i (i=1,2,...,M), generate a 128-dimensional dense SIFT feature for every pixel of the image; match the dense SIFT features of image i (i=1,2,...,M) against the dense SIFT features of every other image j (j=1,2,...,M, j≠i) using the method of Kim et ...
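
The sketch below illustrates the dense-descriptor part of this step: 128-dimensional SIFT descriptors computed on a regular keypoint grid with OpenCV, matched between two images by a brute-force nearest-neighbour search. The grid stride and the simple matcher are stand-ins for illustration only; the patent computes descriptors per pixel and uses the (truncated) matching method of Kim et al.

```python
# Illustrative sketch only: dense 128-D SIFT descriptors on a keypoint grid,
# matched between two images with brute-force nearest-neighbour search.
# The patent works per pixel and uses Kim et al.'s matching algorithm instead.
import cv2
import numpy as np

def dense_sift(gray: np.ndarray, step: int = 4) -> np.ndarray:
    """SIFT descriptors at a regular grid of keypoints; returns an (N, 128) array."""
    sift = cv2.SIFT_create()
    grid = [cv2.KeyPoint(float(x), float(y), float(step))
            for y in range(0, gray.shape[0], step)
            for x in range(0, gray.shape[1], step)]
    _, descriptors = sift.compute(gray, grid)
    return descriptors

def match_descriptors(desc_i: np.ndarray, desc_j: np.ndarray):
    """Best match in image j for every descriptor of image i."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    return matcher.match(desc_i, desc_j)
```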



Abstract

The invention relates to an image object co-segmentation method guided by local shape transfer, comprising: inputting M images containing objects of the same semantic category, performing a saliency analysis on each image, and generating an initial foreground/background segmentation result; performing dense feature point matching between every pair of images; establishing, according to the matching results, the correspondence between each local image region and local regions from other images; learning the weights of these correspondences with a local linear structure preservation algorithm; and transferring the foreground/background segmentation results between corresponding local regions with an iterative solution algorithm to obtain the final segmentation results. The invention performs well on the co-segmentation of image objects of the same semantic category and can be applied to fields such as image content understanding and image object recognition.
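
The "local linear structure preservation" weights mentioned in the abstract can be pictured with a locally-linear-embedding-style solve: each local region is reconstructed from its corresponding regions in other images, with weights constrained to sum to one. The sketch below is an illustrative stand-in under that assumption, not the patent's exact formulation, and its function names are hypothetical.

```python
# Illustrative LLE-style weight solve: reconstruct one local region's feature
# vector from the features of its corresponding regions in other images,
# subject to the weights summing to one. A stand-in for the patent's
# "local linear structure preservation" step, not its exact formulation.
import numpy as np

def correspondence_weights(region: np.ndarray, corresponding: np.ndarray,
                           reg: float = 1e-3) -> np.ndarray:
    """region: (d,) feature vector; corresponding: (k, d) features of its
    corresponding local regions. Returns k weights that sum to 1."""
    diff = corresponding - region                        # (k, d) local differences
    gram = diff @ diff.T                                 # (k, k) local Gram matrix
    gram += reg * np.trace(gram) * np.eye(len(gram))     # regularise for stability
    w = np.linalg.solve(gram, np.ones(len(gram)))
    return w / w.sum()                                   # enforce sum-to-one
```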

Description

Technical field

[0001] The invention belongs to the technical fields of image processing and computer vision, and relates to an image object co-segmentation method guided by local shape transfer.

Background technique

[0002] Given an image set containing objects of the same semantic category, image object co-segmentation technology mainly considers how to segment the common objects from it, so as to support higher-level visual understanding tasks such as image content understanding and object detection. In 2006, Rother et al. first proposed the concept of image object co-segmentation, using a generative model to perform foreground/background segmentation on image pairs containing objects of the same category. This method uses a Gaussian model to generate a potential foreground histogram, adds the difference between the foreground histograms of the image pair as a global constraint to a Markov-random-field-based energy, and finally the TRGC optimization algorithm is us...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/46, G06T7/11, G06T7/136
CPC: G06T2207/20081, G06T2207/20016, G06T2207/10004, G06V10/462
Inventor: 陈小武, 滕炜, 张宇, 李甲, 赵沁平
Owner: BEIHANG UNIV