Automatic matting method based on semantic segmentation and saliency analysis

A semantic segmentation and saliency analysis technology, applied to image analysis, image enhancement, image data processing and related fields. It addresses the problems that saliency computed on a whole image is error-prone and that existing methods cannot generalize, achieving the effect of saving manpower and producing accurate matting results.

Active Publication Date: 2020-02-04
NANJING INST OF TECH
9 Cites · 16 Cited by

AI Technical Summary

Problems solved by technology

[0005] Some methods are designed for matting specific image content, such as portraits: a deep model is trained under prior assumptions and then used to extract that specific content automatically. Such methods are effective only for certain kinds of tasks and cannot be applied universally.
[0006] Other methods assume that what the user is interested in is the most salient part of the image, compute a saliency map to obtain a trimap, and then complete the matting. On the one hand, the object obtained this way may not be the one the user wants to extract; on the other hand, current saliency detection algorithms are usually effective only on a local image with a clear subject, and they easily produce wrong results on a whole image containing varied content.

Examples


Embodiment 1

[0059] The automatic matting method of the present invention is applied to the image shown in Figure 6, with "person" as the object to be extracted. The specific steps are as follows:

[0060] Step 1): input the matting category, "person", and the image to be processed;

[0061] Step 2): process Figure 6 with a semantic segmentation method, preferably the method of Document 1, to obtain a semantic segmentation map, as shown in Figure 7;

[0062] Step 3): check whether Figure 7 contains pixels of the category "person"; since it does, go to Step 4);
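
Steps 2) and 3) can be illustrated with a minimal sketch. The patent does not reproduce the segmentation model of Document 1, so a pretrained DeepLabV3 from torchvision is assumed as a stand-in; the file name input.jpg and the Pascal VOC class index are likewise illustrative.

```python
# Sketch of Steps 2)-3): semantic segmentation, then check whether the
# requested category ("person") appears in the segmentation map.
# Assumption: torchvision's pretrained DeepLabV3 stands in for the model of Document 1.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("input.jpg").convert("RGB")        # the image to be processed (Figure 6)
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))["out"][0]
seg_map = logits.argmax(0).numpy()                    # per-pixel class indices (Figure 7)

PERSON_CLASS = 15                                     # "person" in the Pascal VOC label set
has_person = bool((seg_map == PERSON_CLASS).any())    # Step 3): proceed only if True
```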

[0063] Step 4): from Figure 7, obtain the target sub-image regions containing the category "person" and crop them from Figure 6 to form the target sub-image set; here the set contains two sub-images, shown in Figure 9 and Figure 10;
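
A minimal sketch of Step 4), continuing from the previous snippet (it reuses image, seg_map and PERSON_CLASS). Treating each connected region of "person" pixels as one target region and cropping its bounding box is an illustrative assumption, not a detail taken from the patent.

```python
# Sketch of Step 4): crop one sub-image per connected "person" region.
import numpy as np
from scipy import ndimage

mask = (seg_map == PERSON_CLASS)
labels, n_regions = ndimage.label(mask)                # connected components of the category mask

sub_images = []
for y_slice, x_slice in ndimage.find_objects(labels):  # one bounding-box slice pair per region
    box = (x_slice.start, y_slice.start, x_slice.stop, y_slice.stop)
    sub_images.append(image.crop(box))                  # target sub-image cut from the original
# For the example of Figure 6 the patent reports two such sub-images (Figures 9 and 10).
```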

[0064] Step 5): process Figure 9 with a saliency detection method to obtain a saliency map, as shown in the leftmost image of Figure 11, t...
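
The remaining steps are truncated above, but the abstract describes them: per-pixel saliency is computed on each sub-image, a trimap is solved from the saliency values, and a matting algorithm then yields the foreground transparency. A rough sketch under those assumptions, using OpenCV's fine-grained static saliency (opencv-contrib-python) as a stand-in detector and Otsu thresholding plus erosion as an illustrative way to build the trimap:

```python
# Sketch of Step 5) onward for one sub-image: saliency map -> trimap -> (matting).
# The saliency detector, thresholds and kernel size are illustrative assumptions.
import cv2
import numpy as np

sub = cv2.cvtColor(np.array(sub_images[0]), cv2.COLOR_RGB2BGR)

saliency = cv2.saliency.StaticSaliencyFineGrained_create()
ok, sal_map = saliency.computeSaliency(sub)            # per-pixel saliency in [0, 1]
sal_u8 = (sal_map * 255).astype(np.uint8)

_, fg = cv2.threshold(sal_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((15, 15), np.uint8)
sure_fg = cv2.erode(fg, kernel)                        # confidently foreground
sure_bg = cv2.erode(255 - fg, kernel)                  # confidently background

trimap = np.full(fg.shape, 128, np.uint8)              # 128 marks the unknown band
trimap[sure_fg == 255] = 255
trimap[sure_bg == 255] = 0
# A standard matting solver (e.g. closed-form matting) would now take the sub-image
# and this trimap and return the per-pixel foreground transparency (alpha).
```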

Abstract

The invention discloses an automatic matting method based on semantic segmentation and saliency analysis. The method comprises, in order, the following steps: acquiring a target sub-image set that matches a matting category input by the user; obtaining a saliency value for each pixel in each target sub-image; solving a trimap from the pixel saliency values; and, from the original image and the trimap, computing the foreground transparency with an image matting algorithm and outputting the matting result. With this invention the user only needs to input a matting category; the specified target is then found automatically in the input image and the matting result is output, avoiding the tedious interactive operations of traditional matting. The method can be widely applied in technical fields involving matting operations and is especially suitable for large-batch, unattended matting scenarios; it greatly saves labor, adapts to different matting objects, and offers high universality and higher accuracy of the processing result.

Description

Technical field

[0001] The invention relates to the field of digital image processing, and in particular to an automatic matting method based on semantic segmentation and saliency analysis.

Background technique

[0002] Extracting foreground objects with fine edges from still images or video sequences is commonly referred to as matting. With the spread of camera-equipped mobile phones, demand for matting applications keeps growing, from photo retouching by ordinary users to the extraction of picture elements by professional image editors. Current matting methods are mainly semi-automatic and require user interaction, which roughly falls into two categories: trimaps and specified lines (strokes). As shown in Figure 1, once the original image and a trimap or specified lines are input, the foreground transparency can be obtained by the matting algorithm, and the...
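
For readers unfamiliar with the term, the "foreground transparency" referred to here is the alpha value of the standard matting model, in which every pixel is assumed to be a blend I = alpha*F + (1-alpha)*B of a foreground and a background colour. A tiny self-contained illustration with made-up values (not taken from the patent):

```python
# Alpha compositing: once the matte (alpha) is known, the extracted foreground
# can be placed over any new background. Values below are purely illustrative.
import numpy as np

alpha = np.array([[1.0, 0.6],
                  [0.2, 0.0]])                          # foreground transparency per pixel
F = np.broadcast_to(np.array([255.0, 0.0, 0.0]), (2, 2, 3))  # extracted foreground: red
B = np.broadcast_to(np.array([0.0, 0.0, 255.0]), (2, 2, 3))  # new background: blue

composite = alpha[..., None] * F + (1 - alpha[..., None]) * B
print(composite[..., 0])                                # red channel fades with alpha
```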

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 7/11, G06T 7/187, G06T 7/194, G06T 5/30
CPC: G06T 5/30, G06T 7/11, G06T 7/187, G06T 7/194
Inventors: 林忠, 黄陈蓉, 卢阿丽, 周静波
Owner: NANJING INST OF TECH