Virtual viewpoint drawing method based on space-time combination in multi-view video

A virtual-viewpoint synthesis technology applied in the field of video and multimedia signal processing, which addresses problems such as inaccurate virtual viewpoint synthesis

Inactive Publication Date: 2013-08-14
SHANDONG UNIV
Cites: 4 · Cited by: 30

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problem of inaccurate virtual viewpoint synthesis caused by purely spatial-domain image post-processing in existing depth-image-based view synthesis methods, the present invention proposes a new depth-map-based view synthesis method

Method used




Embodiment Construction

[0041] The invention was tested on the "Mobile" multi-view video sequence, which contains videos of a scene captured from 9 viewpoints, together with the corresponding depth information and the intrinsic and extrinsic parameters of each camera. In the experiment, viewpoint No. 4 was selected as the reference viewpoint and viewpoint No. 5 as the virtual viewpoint.

[0042] Figure 1 shows the flowchart of the present invention; following this flowchart, we describe the specific implementation.

[0043] (1) 3D image transformation. 3D image transformation projects the pixels of the reference viewpoint onto the virtual viewpoint plane according to the camera projection principle. The process consists of two steps: first, the pixels of the reference viewpoint are back-projected into 3D space; then they are projected from 3D space onto the virtual viewpoint plane. Figure 2 shows the color image and depth image of the reference viewpoint. Suppose th...
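The two-step projection described above can be sketched as follows. This is a minimal NumPy illustration of depth-image-based forward warping, not the patent's exact implementation; the function name and the z-buffered splatting loop are our own assumptions, and the camera parameters follow the common pinhole convention (3x3 intrinsic matrix K, 3x4 extrinsic matrix [R|t] mapping world to camera coordinates).

```python
import numpy as np

def warp_to_virtual_view(color, depth, K_ref, RT_ref, K_vir, RT_vir):
    """Forward-warp a reference view into a virtual view (3D image warping).

    color : (H, W, 3) reference color image
    depth : (H, W) per-pixel depth along the camera axis, in metric units
    K_*   : 3x3 intrinsics; RT_* : 3x4 [R|t] extrinsics (world -> camera)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Step 1: back-project reference pixels into 3D world coordinates.
    cam_pts = np.linalg.inv(K_ref) @ pix * depth.reshape(1, -1)  # camera coords
    R_ref, t_ref = RT_ref[:, :3], RT_ref[:, 3:]
    world = R_ref.T @ (cam_pts - t_ref)                          # world coords

    # Step 2: project the 3D points onto the virtual viewpoint plane.
    R_vir, t_vir = RT_vir[:, :3], RT_vir[:, 3:]
    proj = K_vir @ (R_vir @ world + t_vir)
    z = proj[2]
    uv = np.round(proj[:2] / z).astype(int)

    # Z-buffered splat: nearer points overwrite farther ones; pixels that
    # receive no projection remain holes for the later filling stages.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    flat = color.reshape(-1, 3)
    ok = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (z > 0)
    for i in np.flatnonzero(ok):
        x, y = uv[0, i], uv[1, i]
        if z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = flat[i]
    return out, zbuf
```

Applying the same warp with the depth map in place of the color image yields the virtual-viewpoint depth image mentioned in the abstract; disocclusion holes appear wherever the reference view cannot see the uncovered background.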



Abstract

The invention provides a novel depth-map-based viewpoint synthesis method. The method first obtains a virtual-viewpoint color image and a virtual-viewpoint depth image through 3D (three-dimensional) image transformation and removes small holes and mapping-error points. It then fills the holes in the virtual-viewpoint depth image and records the pixel coordinates of the holes. Next, through inverse 3D image transformation, it locates the target region in the reference-viewpoint frame that gives rise to the holes, and recovers the background of that region using the previous and next frames. Finally, the remaining holes are repaired with an exemplar-based image inpainting algorithm. Because the method combines the temporal and spatial domains, filling holes with image information from the previous and next frames, the result is more accurate and the virtual-viewpoint image quality is higher than with hole filling based purely on the spatial domain. In addition, the target region is located via inverse mapping so that background recovery is performed in a targeted way, and for comparable results, larger-scale background recovery enables more efficient hole filling.
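The temporal stage of the pipeline above, where hole (cavity) pixels are filled from the previous and next frames before any spatial inpainting, can be sketched as follows. This is an illustrative simplification under our own assumptions (the function name, the argument layout, and the prefer-previous-frame policy are hypothetical); the patent additionally locates the target region by inverse mapping, which is omitted here.

```python
import numpy as np

def fill_holes_temporally(cur, prev, nxt, hole_mask, prev_mask, next_mask):
    """Fill disocclusion holes in the current virtual frame from the
    previous and next frames (temporal domain), leaving any remaining
    holes for exemplar-based inpainting (spatial domain).

    cur/prev/nxt : (H, W, 3) virtual-view frames
    *_mask       : boolean (H, W), True where that frame has valid pixels
    Returns the filled frame and the mask of holes still unfilled.
    """
    out = cur.copy()
    # Prefer the previous frame: where a moving foreground object has
    # already uncovered the background there, copy it directly.
    use_prev = hole_mask & prev_mask
    out[use_prev] = prev[use_prev]
    # Fall back to the next frame for holes the previous frame misses.
    use_next = hole_mask & ~prev_mask & next_mask
    out[use_next] = nxt[use_next]
    # Holes visible in neither neighboring frame survive to inpainting.
    remaining = hole_mask & ~prev_mask & ~next_mask
    return out, remaining
```

The point of this stage is that background uncovered by motion is real image data rather than an extrapolated guess, which is why the combined spatio-temporal filling is more accurate than purely spatial post-processing.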

Description

Technical field

[0001] The invention relates to a method for synthesizing virtual viewpoints based on depth images, and belongs to the technical field of video and multimedia signal processing.

Background technique

[0002] With good user interaction and a vivid visual experience, 3D TV has become the leading technology of the new generation of multimedia. Reconstructing the depth information of a real scene lets stereoscopic TV give viewers the feeling that the scene extends out of the screen and can almost be touched. Among these technologies, multi-viewpoint video is considered to have extremely broad application prospects. Its main goal is that, on the playback side, users can choose different viewpoints of the same scene and enjoy it from different angles according to their own needs, obtaining a strong sense of presence and reality. However, the limitations of transmission bandwidth and transmission rate increase the difficulty of realizing mul...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N13/04, H04N15/00
Inventor: 刘琚, 成聪, 杨晓辉
Owner: SHANDONG UNIV