
Virtual viewpoint image generation method based on a generative adversarial network

A technology for virtual viewpoint image generation, applied in the fields of stereo vision and deep learning. It addresses the problem that prior methods are inapplicable to general natural images, compensates for the lack of depth-sensing hardware, and offers strong scalability.

Active Publication Date: 2018-09-04
TIANJIN UNIV
Cites: 4 | Cited by: 40

AI Technical Summary

Problems solved by technology

However, past research, such as depth-map-based methods for generating virtual viewpoint images, typically relies on prior information such as the depth maps or disparity maps of two viewpoint images.
This is not applicable to more general natural images, for which depth and disparity information are unknown.

Method used



Examples


Embodiment Construction

[0017] The invention uses the generative adversarial network (GAN) model from deep learning, taking road-scene images from the KITTI dataset as the research object, and achieves virtual viewpoint image generation from a single monocular image without relying on prior information such as depth or disparity. The method can also be generalized to virtual viewpoint image generation for other natural images.
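For reference, the standard adversarial objective that such a generator-discriminator pair optimizes (following Goodfellow et al.; the patent summary does not spell out its exact loss, so this is the canonical form, not necessarily the patent's) is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

In a conditional image-to-image setting such as this one, the generator $G$ is conditioned on the input monocular view rather than on pure noise $z$, and the discriminator judges whether an input/output image pair is real or generated.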

[0018] In order to make the purpose and technical solution of the present invention clearer, the embodiments of the present invention are described in further detail below.

[0019] 1. Build the dataset

[0020] This experiment uses the KITTI 2015 stereo dataset. Because its data volume is limited, the present invention addresses this through data augmentation. In accordance with the characteristics of binocular images, this experiment adopts traditional augmentation methods, including horizontal flipping and cropping. Through data enhan...
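The flipping step deserves care with binocular data: horizontally mirroring a stereo pair also swaps the roles of the left and right views. A minimal NumPy sketch of these two augmentations (function names and crop sizes are illustrative, not from the patent):

```python
import numpy as np

def hflip_stereo_pair(left, right):
    """Horizontally flip a stereo pair.

    Mirroring each image reverses the horizontal disparity direction,
    so the mirrored right image becomes the new left view, and vice versa.
    """
    return right[:, ::-1].copy(), left[:, ::-1].copy()

def random_crop_pair(left, right, crop_h, crop_w, rng=None):
    """Take the same random crop from both views so they stay aligned."""
    rng = rng or np.random.default_rng()
    h, w = left.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    lft = rng.integers(0, w - crop_w + 1)
    return (left[top:top + crop_h, lft:lft + crop_w],
            right[top:top + crop_h, lft:lft + crop_w])

# Toy example: 4x6 single-channel "images"
L = np.arange(24).reshape(4, 6)
R = L + 100
fl, fr = hflip_stereo_pair(L, R)
assert np.array_equal(fl, R[:, ::-1])   # new left view is the mirrored old right
cl, cr = random_crop_pair(L, R, 2, 3)
assert cl.shape == (2, 3) and cr.shape == (2, 3)
```

Cropping both views with identical coordinates keeps the pair geometrically consistent, which is what allows the augmented pairs to serve as valid training samples.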



Abstract

The invention relates to a virtual viewpoint image generation method based on a generative adversarial network. The method comprises the following steps. First, making a dataset: obtaining the image pairs needed to train the generative adversarial network. Second, constructing the model: in both the generator and the discriminator, each convolutional layer is followed by a batch normalization layer (BatchNorm) and a ReLU activation function; all convolutional layers use a 4×4 convolution kernel with stride 2, so the length and width of a feature map are both halved during downsampling and both doubled during upsampling; the Dropout layers use a dropout rate of 50%; the ReLU activation function used is LeakyReLU. Third, defining the loss. Fourth, training and testing the model.
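The halving and doubling behavior described above follows from the standard convolution size formula, out = ⌊(in + 2p − k)/s⌋ + 1, with a 4×4 kernel and stride 2. A quick check of that arithmetic (the padding value of 1 is an assumption; the abstract does not state it, but it is the setting under which the sizes halve exactly):

```python
def conv_out(size, k=4, s=2, p=1):
    """Output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * p - k) // s + 1

def deconv_out(size, k=4, s=2, p=1):
    """Output size of a transposed convolution: (size - 1) * s - 2p + k."""
    return (size - 1) * s - 2 * p + k

size = 256
for _ in range(3):            # three downsampling steps: 256 -> 128 -> 64 -> 32
    size = conv_out(size)
assert size == 32
for _ in range(3):            # symmetric upsampling restores the original size
    size = deconv_out(size)
assert size == 256
```

This exact-halving property is what lets an encoder-decoder generator mirror its downsampling and upsampling stages layer for layer.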

Description

technical field [0001] The invention belongs to the fields of stereo vision and deep learning, and relates to a method for generating virtual viewpoint images using a generative adversarial network model. Background technique [0002] In daily life, when a human looks at an object with both eyes, the horizontally arranged left and right eyes observe it from slightly different angles, so the images captured by the two eyes differ slightly. The human visual system fuses the left and right views in the brain, allowing humans to perceive clear depth from these subtle differences, to establish correspondences between features, and to match the image points of the same physical point across the different views, producing a three-dimensional impression of the observed scene. [0003] Binocular stereo vision is an important form of machine vision, and its most basic...

Claims


Application Information

IPC IPC(8): H04N13/111, H04N13/161, H04N13/261, H04N13/275
Inventor 侯春萍, 莫晓蕾, 杨阳, 管岱, 夏晗
Owner TIANJIN UNIV