A Cross-Domain Variational Adversarial Autoencoder Method

An autoencoding technology applied in the field of cross-domain variational adversarial autoencoding, which achieves good results.

Active Publication Date: 2022-03-29
BEIFANG UNIV OF NATIONALITIES

AI Technical Summary

Problems solved by technology

In the fields of industrial design and virtual reality, designers often wish to provide a single picture and generate from it a series of continuously transformed pictures in a target domain. Existing methods cannot meet this demand.



Examples


Embodiment Construction

[0038] The present invention will be further described below in conjunction with specific examples.

[0039] The cross-domain variational adversarial autoencoding method provided in this embodiment realizes one-to-many continuous transformation of cross-domain images without requiring any paired data. As shown in Figure 1, which presents the overall network framework, the encoder decomposes a sample into a content code and a style code: the content code is used for the adversarial operation, and the style code is used for the variational operation. The decoder concatenates the content code and the style code to generate an image. The method includes the following steps:
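The encoder/decoder split described above can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the dimensions, the random linear maps `W_enc`/`W_dec`, and the split point between content and style are all assumptions for demonstration.

```python
# Toy sketch of the framework: an encoder whose output is split into a
# content code and a style code, and a decoder that concatenates them.
import numpy as np

rng = np.random.default_rng(0)
D, C, S = 8, 3, 2                # input dim, content dim, style dim (assumed)

W_enc = rng.standard_normal((C + S, D)) * 0.1   # hypothetical encoder weights
W_dec = rng.standard_normal((D, C + S)) * 0.1   # hypothetical decoder weights

def encode(x):
    z = W_enc @ x
    return z[:C], z[C:]          # (content code, style code)

def decode(content, style):
    # Decoder input is the concatenation ("splicing") of the two codes.
    return W_dec @ np.concatenate([content, style])

x = rng.standard_normal(D)
c, s = encode(x)
x_hat = decode(c, s)             # same-domain reconstruction path
```

In the actual method both maps would be deep networks trained jointly; the sketch only shows the data flow of decomposition and concatenation.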

[0040] 1) Use an encoder to decouple the content code and style code of the cross-domain data.

[0041] First, the encoder decomposes an image into its content code and style code and obtains the corresponding posterior distributions. For the content code, an adversarial autoencoder (AAE) is introduced; for the style code, a variational operation is used to fit its distribution.
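The two fitting operations named above can be illustrated under simplifying assumptions (toy scalar-weight "networks", Gaussian priors): the style code uses the standard VAE reparameterization with a KL regularizer, while the content code is pushed toward a prior by an AAE-style discriminator. Every weight and sample here is a stand-in, not the patent's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Variational operation on the style code: z = mu + sigma * eps
mu, log_var = np.array([0.5, -0.2]), np.array([0.1, 0.3])
eps = rng.standard_normal(2)
style_z = mu + np.exp(0.5 * log_var) * eps
# KL divergence of N(mu, sigma^2) from N(0, I), the usual VAE regularizer
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Adversarial operation on the content code: a toy logistic discriminator
# scores prior samples against encoder outputs; the encoder tries to fool it.
def discriminator(z, w):
    return 1.0 / (1.0 + np.exp(-w @ z))

w = rng.standard_normal(3)
content_z = rng.standard_normal(3)     # stand-in for an encoder output
prior_z = rng.standard_normal(3)       # sample from the content-code prior
d_loss = (-np.log(discriminator(prior_z, w) + 1e-9)
          - np.log(1.0 - discriminator(content_z, w) + 1e-9))
g_loss = -np.log(discriminator(content_z, w) + 1e-9)  # encoder's adversarial loss
```

Minimizing `kl` fits the style posterior to the prior analytically, while alternating `d_loss`/`g_loss` updates fits the content posterior to its prior adversarially, matching the division of labor the text describes.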



Abstract

The invention discloses a cross-domain variational adversarial autoencoding method comprising the steps of: 1) using an encoder to decouple the content code and style code of cross-domain data; 2) using adversarial operations and variational operations to fit the image's content code and style code, respectively; 3) achieving image reconstruction by splicing the content code and style code, and obtaining a one-to-many continuous transformation of cross-domain images by cross-splicing content codes and style codes from different domains. The method of the invention realizes one-to-many continuous transformation of cross-domain images without requiring any paired data.
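Step 3 of the abstract can be sketched as follows. The arrays are toy stand-ins (an actual decoder would map the spliced code back to an image): cross-splicing content from domain A with style from domain B gives the cross-domain translation, and interpolating the style code yields the one-to-many continuous transformation.

```python
import numpy as np

content_a = np.array([1.0, 0.0])   # content code from domain A (stand-in)
style_a   = np.array([0.2])        # style code from domain A (stand-in)
style_b   = np.array([0.9])        # style code from domain B (stand-in)

def splice(content, style):
    # Concatenated code that would be fed to the decoder.
    return np.concatenate([content, style])

reconstruction_a = splice(content_a, style_a)   # same-domain reconstruction
translation_ab   = splice(content_a, style_b)   # cross-domain cross-splice

# One-to-many continuous transformation: sweep the style code from A to B.
steps = [splice(content_a, (1 - t) * style_a + t * style_b)
         for t in np.linspace(0.0, 1.0, 5)]
```

The endpoints of the sweep coincide with the reconstruction and the cross-domain translation, and the intermediate codes give the continuous series of outputs a designer would decode into pictures.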

Description

technical field

[0001] The present invention relates to the technical field of computer vision, and in particular to a cross-domain variational adversarial autoencoding method.

Background technique

[0002] In the field of computer vision, image generation and image translation using single-domain data have achieved very good results. In real life and in applications, however, data usually come from different domains. For example, an object can have both a sketch representation and a view representation, the same text content can appear in different fonts, and so on. How to process cross-domain data is therefore an important research direction. Existing cross-domain work mainly builds on generative adversarial networks (GANs). This class of methods achieves image generation by automatically fitting the posterior distribution through adversarial learning on data from different domains. The learning process always requires paired data samples, which places relatively high requirements on...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T3/00, G06T9/00, G06N3/08
CPC: G06F18/214
Inventors: 白静, 田栋文, 张霖, 杨宁
Owner: BEIFANG UNIV OF NATIONALITIES