An image-to-image translation method based on a discriminant region candidate adversarial network

An image-region technology in the field of image processing that addresses problems such as artifacts, unbalanced color distribution, and low image resolution, achieving high-resolution results with fewer artifacts.

Inactive Publication Date: 2019-01-08
OCEAN UNIV OF CHINA

AI Technical Summary

Problems solved by technology

[0004] The present invention provides an image-to-image translation method based on a discriminative region proposal adversarial network to solve technical problems in prior-art image-to-image translation, such as artifacts, unbalanced color distribution, and low resolution of the converted images. The proposed method can synthesize high-quality images with high resolution, realistic details, and fewer artifacts.



Examples


Embodiment

[0040] The following embodiments are preferred embodiments of the present application.

[0041] An image-to-image translation method based on a discriminative region proposal adversarial network. The embodiment of the present application proposes Discriminative Region Proposal Adversarial Networks (DRPANs) for high-quality image-to-image translation. The network comprises a generator, a patch discriminator, and a reviser (corrector), where the patch discriminator is a PatchGAN Markovian discriminator used to extract a discriminative region, from which a masked fake image is generated.
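The mask operation mentioned above can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the patent's implementation: the function name `masked_fake` and its arguments are our own, and the discriminative region is assumed to be an axis-aligned square given by its top-left corner. The region is cut from the generated (fake) image and pasted onto a copy of the real image, producing the "masked fake" sample that the reviser later compares against the real image.

```python
import numpy as np

def masked_fake(real, fake, top, left, size):
    """Paste the discriminative region of the generated (fake) image onto
    a copy of the real image. Hypothetical helper, illustrative only."""
    out = real.copy()                     # keep the real image untouched
    out[top:top + size, left:left + size] = fake[top:top + size, left:left + size]
    return out

real = np.zeros((6, 6))                   # stand-in for a real image
fake = np.ones((6, 6))                    # stand-in for a generated image
m = masked_fake(real, fake, top=2, left=2, size=3)
print(m.sum())                            # 9.0: only the 3x3 pasted region differs
```

Everything outside the discriminative region comes from the real image, so the reviser's real-vs-fake judgment is focused on the region most likely to contain artifacts.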

[0042] As shown in Figure 2, the method includes the following steps:

[0043] S1: Input the semantic segmentation map of the real image into the generator to generate the first image;

[0044] Here, image semantic segmentation means that the machine autom...



Abstract

The invention provides an image-to-image translation method based on a discriminative region proposal adversarial network. The method includes: inputting the semantic segmentation map of a real image into the generator to generate a first image; inputting the first image into the patch discriminator, which predicts a score map; using a sliding window to find the image block in the score map with the most obvious artifacts, and mapping that block back into the first image to obtain a discriminative region; performing a mask operation on the real image using the discriminative region to obtain a masked fake image; and inputting the real image and the masked fake image into the reviser, which judges whether its input is real or fake, so that the generator, guided by the reviser's feedback, produces images closer to the real image. The method can synthesize high-quality images with high resolution, realistic details, and fewer artifacts.
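The sliding-window search over the score map can be sketched as follows. This is a minimal NumPy sketch under our own assumptions (not the patent's code): the score map is a 2-D array where higher values mean "more realistic", the window is square, and the most artifact-prone region is the window with the lowest mean score.

```python
import numpy as np

def find_discriminative_region(score_map, win, stride=1):
    """Slide a win x win window over a patch-discriminator score map and
    return the top-left corner of the lowest-scoring (most fake-looking)
    window. Hypothetical helper; names are illustrative."""
    h, w = score_map.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(0, h - win + 1, stride):
        for j in range(0, w - win + 1, stride):
            s = score_map[i:i + win, j:j + win].mean()
            if s < best:
                best, best_pos = s, (i, j)
    return best_pos

score_map = np.ones((8, 8))
score_map[2:5, 3:6] = 0.1                 # a low-realism (artifact) region
print(find_discriminative_region(score_map, win=3))  # -> (2, 3)
```

Because a PatchGAN score map is spatially aligned with the input, the window's coordinates can then be scaled back to the first image's resolution to cut out the discriminative region.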

Description

Technical field

[0001] The invention relates to the technical field of image processing, and in particular to an image-to-image translation method based on a discriminative region proposal adversarial network.

Background

[0002] From the perspective of human visual perception, we consider a synthetic image fake usually because it contains local artifacts. Although it may look real at first glance, we can easily tell real from fake after staring at it for only about 1000 ms. Humans can map a realistic scene from coarse structure to fine detail: we usually grasp the global structure of a scene while attending to the details of an object and understanding how it relates to the surrounding environment.

[0003] Many efforts have been made to develop automatic image translation systems. A straightforward approach is to optimize an L1 or L2 loss in pixel space; however, both suffer from blurring. Therefore, some work...
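The blurring of pixel-space L1/L2 losses mentioned in [0003] has a simple numeric illustration. When several ground-truth pixel values are plausible for the same input, the single prediction that minimizes L2 loss is their mean (an averaged, in-between value, which reads as blur), while L1 is minimized at the median. This NumPy sketch is ours, not from the patent:

```python
import numpy as np

# Multimodal plausible pixel values for one location (e.g. edge vs. background).
targets = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

# Evaluate the average L1 and L2 loss for every candidate prediction in [0, 1].
candidates = np.linspace(0.0, 1.0, 101)
l2 = [np.mean((targets - c) ** 2) for c in candidates]
l1 = [np.mean(np.abs(targets - c)) for c in candidates]

print(candidates[np.argmin(l2)])  # ~0.6: the mean, an in-between "blurry" value
print(candidates[np.argmin(l1)])  # 1.0: the median, one of the actual modes
```

This is why purely pixel-wise objectives tend to hedge between modes, and why adversarial losses (which reward samples that look like any single plausible output) are added on top.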


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00; G06T7/11; G06T3/40
CPC: G06T3/4053; G06T7/0002; G06T2207/20081; G06T2207/20084; G06T2207/30168; G06T2207/30181; G06T7/11
Inventor: 郑海永, 王超, 俞智斌
Owner: OCEAN UNIV OF CHINA