
Generative adversarial network-based pixel-level portrait cutout method

A pixel-level portrait matting technology based on generative adversarial networks, applied in the field of computer vision. It addresses the problems that data sets for pixel-level portrait matting are small and that their calibration and production are costly, time-consuming and labor-intensive, and achieves enhanced robustness, improved segmentation smoothness, and high segmentation accuracy.

Active Publication Date: 2018-04-20
XIDIAN UNIV
8 Cites, 67 Cited by

AI Technical Summary

Problems solved by technology

However, because of its simple network structure, the fully convolutional network typically requires a very large set of training images.
In addition, the data sets available for pixel-level portrait matting are small, and calibrating and producing them is extremely costly: a single labeled training image takes about half an hour of manual annotation, while a segmentation model based on a fully convolutional network needs tens of thousands of training images to achieve good results. Obtaining a usable data set by manual labeling is therefore time-consuming and labor-intensive.



Examples


Embodiment 1

[0027] Addressing the inefficiency of existing portrait matting methods, which stems from their need for very large numbers of training images, the present invention proposes a pixel-level portrait matting method based on a generative adversarial network. As with training a fully convolutional network model, the invention requires labeled images, obtained by manual annotation, that separate real portraits from backgrounds, but the number of training images needed is much smaller. See Figure 7b for a manually annotated portrait/background separation image. See Figure 6; the portrait matting method of the present invention comprises the following steps:

[0028] (1) Preset networks: preset a generative network and a judgment network, and place the two networks in an adversarial learning mode, i.e., the loss function of the generative network is obtained through the loss...
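To make the adversarial coupling concrete, here is a minimal PyTorch sketch assuming a pix2pix-style conditional setup, in which the judgment network D scores (image, mask) pairs and the generative network G's loss is derived from D's output. The function names and the assumption that D ends in a sigmoid are illustrative; the patent's exact loss terms are truncated in this excerpt.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # assumes D outputs a probability via a final sigmoid

def judgment_loss(D, image, real_mask, fake_mask):
    # First pair: real image + hand-annotated mask, target label 1 ("real").
    p_real = D(torch.cat([image, real_mask], dim=1))
    # Second pair: real image + generated mask, target label 0 ("fake").
    p_fake = D(torch.cat([image, fake_mask.detach()], dim=1))
    return bce(p_real, torch.ones_like(p_real)) + \
           bce(p_fake, torch.zeros_like(p_fake))

def generative_loss(D, image, fake_mask):
    # The generative network is trained so that the judgment network
    # scores its (image, generated mask) pair as real.
    p_fake = D(torch.cat([image, fake_mask], dim=1))
    return bce(p_fake, torch.ones_like(p_fake))
```

Minimizing both losses alternately, as the abstract describes, adjusts the configuration parameters of the two networks until the generative network's training is complete.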

Embodiment 2

[0036] The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in Embodiment 1. The generative network described in step 1 is a deep neural network with skip connections: the skip connections (Skip Connection), which are identity mappings, form gradient-transfer paths between the N serial encoder layers and the N serial decoder layers that make up the generative network. See Figure 1. Specifically, encoder layers 3 through 8 are connected to decoder layers 11 through 16. For example, the output of encoder layer 3 is input simultaneously to encoder layer 4 and to decoder layer 16: the output from encoder layer 3 to encoder layer 4 is the basic (serial) output, while the output from encoder layer 3 to decoder layer 16 is the result of the skip connection, and so on, forming...
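A minimal PyTorch sketch of this topology follows, assuming N = 8 (encoder layers 1-8, decoder layers 9-16). Kernel sizes, strides, channel widths and activations are illustrative choices, not the patent's values, and the skip feature is resized before concatenation because this excerpt does not give per-layer resolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGenerator(nn.Module):
    """Sketch of Embodiment 2's skip topology: 8 serial encoder layers
    (numbered 1-8) and 8 serial decoder layers (numbered 9-16), with
    identity skips from encoder layers 3-8 to decoder layers 16-11
    (3 -> 16, 4 -> 15, ..., 8 -> 11)."""

    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3 if i == 0 else ch, ch, 4, 2, 1),
                          nn.LeakyReLU(0.2))
            for i in range(8))
        # Decoder layers 11-16 (indices 2-7) receive the serial input
        # concatenated with a skip feature, so their input width doubles.
        self.dec = nn.ModuleList(
            nn.Sequential(nn.ConvTranspose2d(ch if i < 2 else 2 * ch,
                                             1 if i == 7 else ch, 4, 2, 1),
                          nn.Sigmoid() if i == 7 else nn.ReLU())
            for i in range(8))

    def forward(self, x):
        feats = []
        for layer in self.enc:              # serial ("basic") encoder path
            x = layer(x)
            feats.append(x)                 # feats[k-1] = encoder layer k output
        for i, layer in enumerate(self.dec):    # i = 0 is decoder layer 9
            if i >= 2:                      # decoder layers 11-16 take a skip
                skip = feats[9 - i]         # layer 11 <- enc 8, ..., 16 <- enc 3
                # Resolutions are not given in this excerpt; resize the skip
                # so the sketch runs for any stride schedule.
                skip = F.interpolate(skip, size=x.shape[2:], mode='nearest')
                x = torch.cat([x, skip], dim=1)
            x = layer(x)
        return x                            # per-pixel portrait probability map
```

For a 256×256 RGB input, `SkipGenerator()(torch.randn(1, 3, 256, 256))` returns a 1×1×256×256 probability map; the skips give gradients a short path from decoder to encoder, which is why they act as gradient-transfer paths.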

Embodiment 3

[0038] The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in Embodiments 1-2. In the present invention, a random deactivation mechanism (Dropout) is introduced into the decoder layers of the generative network. Specifically, each decoder layer of the generative network randomly discards part of its activations before its final output, i.e., the values of the randomly selected outputs are set to 0. This removes many unnecessary computations and improves the robustness of the network structure.
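A one-layer PyTorch sketch of where this dropout sits; the 0.5 drop probability and the channel sizes are assumptions, since this excerpt does not state them.

```python
import torch.nn as nn

# One decoder layer with Embodiment 3's random deactivation: a random
# subset of activation values is set to 0 just before the layer's output.
decoder_layer = nn.Sequential(
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes activations during training
)
```

Note that `nn.Dropout` is active only while the network is in training mode; at test time all activations pass through unchanged, which matches the usual random-deactivation behaviour.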



Abstract

The invention discloses a generative adversarial network-based pixel-level portrait cutout method and solves the problem that massive, expensively produced data sets are needed to train and optimize a network in the field of machine cutout. The method comprises the steps of presetting a generative network and a judgment network in an adversarial learning mode, wherein the generative network is a deep neural network with skip connections; inputting a real image containing a portrait to the generative network, which outputs a person/scene segmentation image; inputting first and second image pairs to the judgment network, which outputs a judgment probability, and determining the loss functions of the generative network and the judgment network; adjusting the configuration parameters of the two networks to minimize the values of their loss functions, thereby completing the training of the generative network; and inputting a test image to the trained generative network to generate the person/scene segmentation image, randomizing the generated image, and finally inputting the probability matrix to a conditional random field for further optimization. The method greatly reduces the number of training images required and improves efficiency and segmentation precision.
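The final refinement step feeds the generator's probability matrix to a conditional random field. Below is a sketch of one way to do this with the third-party pydensecrf package, which the patent does not name; the function `crf_refine` and all pairwise parameters (sxy, srgb, compat, iteration count) are illustrative choices rather than the patent's values.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, prob_fg, n_iters=5):
    """image: HxWx3 uint8 RGB; prob_fg: HxW float array of foreground
    probabilities produced by the trained generative network."""
    h, w = prob_fg.shape
    probs = np.stack([1.0 - prob_fg, prob_fg]).astype(np.float32)  # (2, H, W)
    d = dcrf.DenseCRF2D(w, h, 2)                  # width, height, 2 labels
    d.setUnaryEnergy(unary_from_softmax(probs))   # -log probabilities
    # Pairwise terms encourage smooth, edge-aligned labels.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = np.array(d.inference(n_iters))            # (2, H*W) marginals
    return q[1].reshape(h, w)                     # refined foreground map
```

Thresholding the returned map at 0.5 yields the final binary portrait/background segmentation.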

Description

Technical field
[0001] The present invention relates to the field of computer vision technology, in particular to a pixel-level portrait matting method based on a generative adversarial network, which is used to separate portraits from backgrounds.
Background technique
[0002] Portrait matting has always been a hot issue in the field of computer vision. Pixel-level portrait matting requires accurately extracting the foreground object from the background and is a refined binary semantic segmentation problem.
[0003] With the rapid development of e-commerce, portrait matting has a very wide range of application scenarios. For example, more and more people choose to buy clothing on the Internet, so the e-commerce function of searching by image came into being. Accurately retrieving similar clothing is very difficult, so it is necessary to segment the portraits in the picture...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/194, G06T7/11, G06T7/143
CPC: G06T2207/20076, G06T2207/20081, G06T2207/20084, G06T7/11, G06T7/143, G06T7/194
Inventor 王伟周红丽王晨吉方凌
Owner XIDIAN UNIV