Dynamic-to-static scene conversion method based on conditional generative adversarial network

A technology for conditional generation and scene conversion, applied to biological neural network models, computer components, instruments, etc. It addresses the poor performance of existing methods on image texture and detail, and achieves optimized semantic consistency, improved authenticity, and improved training stability.

Pending Publication Date: 2021-03-16
Applicant: SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

An existing conversion method uses a conditional generative adversarial network (P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks [C]. CVPR, 2017). This method, however, regenerates every pixel of the static scene image. Although the average per-pixel error between the generated image and the real image can be kept small, the averaging effect causes the method to perform poorly on image texture and detail, and detail and texture information are particularly important in robot pose estimation and localization tasks.



Examples


Embodiment 1

[0014] Embodiment 1: see Figure 1. A dynamic-to-static scene conversion method based on a conditional generative adversarial network includes the following steps:

[0015] Step 1: data preprocessing stage;

[0016] Step 2: model construction stage, where the model includes a cascaded coarse-to-fine two-stage generator network and two types of discriminator networks;

[0017] Step 3: model parameter training stage;

[0018] Step 4: dynamic-to-static scene image conversion stage;

[0019] In the data preprocessing stage, the data is processed to meet the requirements of the network. First, the images in the dataset are randomly cropped and scaled to produce training samples; multiple groups of training data are then constructed, with each group consisting of the dynamic scene image, the binary mask of the dynamic target, and the static scene image at the same location.
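Below is a minimal sketch of this triplet construction, assuming a PyTorch/torchvision pipeline; the function name, the 256-pixel output size, and the single shared crop window are illustrative assumptions rather than details given in the patent.

    # Minimal sketch of the training-triplet construction described above.
    # Assumes PIL images at least `out_size` pixels on each side.
    import random
    import torchvision.transforms.functional as TF

    def make_training_triplet(dynamic_img, mask_img, static_img, out_size=256):
        # Sample one random crop window and apply it to all three images so
        # the dynamic scene, dynamic-target mask, and static scene stay aligned.
        w, h = dynamic_img.size
        crop = random.randint(out_size, min(w, h))
        left = random.randint(0, w - crop)
        top = random.randint(0, h - crop)
        triplet = []
        for img in (dynamic_img, mask_img, static_img):
            patch = TF.crop(img, top, left, crop, crop)
            patch = TF.resize(patch, [out_size, out_size])
            triplet.append(TF.to_tensor(patch))
        return triplet  # (dynamic, mask, static), each C x out_size x out_size

Sharing one crop window per group keeps the three images pixel-aligned, which any mask-based supervision of the dynamic region relies on.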

[0020] In the model construction stage, the corresponding deep neural network is designed and built according to the model...
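As a rough illustration of the cascaded coarse-to-fine generator from Step 2, the sketch below wires a coarse stage, inference of the dynamic-region mask from the coarse/dynamic pixel difference (as the abstract describes), and a fine stage. All layer counts, channel widths, and the 0.1 threshold are assumptions; the patent's actual architecture, including its contextual attention module, is not reproduced here.

    # Hedged sketch of a cascaded coarse-to-fine generator (assumed layers).
    import torch
    import torch.nn as nn

    def conv_block(cin, cout, stride=1):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1),
                             nn.ReLU(inplace=True))

    class CoarseToFineGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            # Stage 1: the coarse network predicts a rough static scene from
            # the 3-channel dynamic input.
            self.coarse = nn.Sequential(
                conv_block(3, 32, 2), conv_block(32, 64, 2),
                nn.Upsample(scale_factor=4), nn.Conv2d(64, 3, 3, 1, 1))
            # Stage 2: the fine network restores the dynamic region; it takes
            # the coarse output plus the inferred binary mask (3 + 1 channels).
            self.fine = nn.Sequential(
                conv_block(4, 32, 2), conv_block(32, 64, 2),
                nn.Upsample(scale_factor=4), nn.Conv2d(64, 3, 3, 1, 1))

        def forward(self, dynamic):
            coarse_out = self.coarse(dynamic)
            # Infer the dynamic-region mask from the pixel difference between
            # the coarse output and the dynamic input; 0.1 is an assumed value.
            mask = ((coarse_out - dynamic).abs()
                    .mean(dim=1, keepdim=True) > 0.1).float()
            fine_out = self.fine(torch.cat([coarse_out, mask], dim=1))
            # Keep the input's pixels in static regions and use the fine
            # output in the dynamic region.
            return dynamic * (1 - mask) + fine_out * mask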



Abstract

The invention relates to a dynamic-to-static scene conversion method based on a conditional generative adversarial network, characterized in that the generator is a cascaded coarse-to-fine two-stage network and the discriminators are PatchGAN and SN-PatchGAN. The method infers a binary mask of the dynamic region from the pixel-value difference between the coarse network's output and the dynamic scene, and then statically restores the dynamic region through the fine network. After deep and shallow features are extracted by the fine network's encoder, a contextual attention mechanism is adopted to optimize scene generation in the dynamic region. Compared with a traditional discriminator, the discriminator adopted by the invention pays more attention to image details, and the training process is more stable. Compared with traditional dynamic-to-static scene conversion methods, the extracted dynamic target region is more accurate, and the generated static scene image is rich in texture and closer to the real scene.
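A common realization of the SN-PatchGAN discriminator mentioned above is a fully convolutional PatchGAN with spectral normalization on every convolution: the per-patch output scores push the network to judge local detail, and spectral normalization stabilizes adversarial training. The layer configuration below is a generic sketch, not the patent's exact network.

    # Generic SN-PatchGAN-style discriminator sketch (assumed configuration).
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class SNPatchDiscriminator(nn.Module):
        def __init__(self, in_channels=3):
            super().__init__()
            self.net = nn.Sequential(
                spectral_norm(nn.Conv2d(in_channels, 64, 4, 2, 1)),
                nn.LeakyReLU(0.2),
                spectral_norm(nn.Conv2d(64, 128, 4, 2, 1)),
                nn.LeakyReLU(0.2),
                spectral_norm(nn.Conv2d(128, 256, 4, 2, 1)),
                nn.LeakyReLU(0.2),
                # No sigmoid: SN-PatchGAN is usually trained with a hinge loss
                # on the raw per-patch scores.
                spectral_norm(nn.Conv2d(256, 1, 4, 1, 1)))

        def forward(self, x):
            # Returns an N x 1 x h x w map of per-patch real/fake scores.
            return self.net(x)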

Description

Technical field

[0001] The invention relates to a conversion method, in particular to a dynamic-to-static scene conversion method based on a conditional generative adversarial network, and belongs to the technical field of deep learning and image generation.

Background technique

[0002] Dynamic scenes pose a significant challenge to vision-based robot pose estimation and localization tasks. The traditional solution is to use dynamic object detection to distinguish valid image regions, discarding the information in dynamic regions and using only static regions for pose estimation or localization. However, when detection of the dynamic target region is inaccurate or the dynamic region is too large, this approach yields inaccurate or too little valid information, which in turn increases pose estimation or localization error. To improve on this, a dynamic-to-static scene conversion method was proposed (Bescos B, Neira J, Sieg...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/32; G06K9/62; G06N3/04
CPC: G06V20/10; G06V10/25; G06N3/045; G06F18/214
Inventors: 吴麟, 孙长银, 陆科林, 徐乐玏
Owner: SOUTHEAST UNIV