
Method for generating virtual multi-viewpoint images based on depth image layering

A multi-viewpoint image and depth image technology, applied in the field of virtual multi-viewpoint image generation, which addresses problems such as a large amount of calculation, long processing time, and camera array model parameters that are difficult to obtain.

Active Publication Date: 2010-12-01
万维显示科技(深圳)有限公司

AI Technical Summary

Problems solved by technology

This method requires complex mapping operations, which involve a large amount of calculation and take a long time; it also demands highly accurate depth data, and the parameters of the camera array model are not easy to obtain.


Examples


Embodiment

[0149] (1) The Racket two-dimensional image test stream with an image resolution of 640×360 and the Racket depth image test stream with a resolution of 640×360 are used as the video files for generating the multi-viewpoint virtual images. Figure 2(a) is a screenshot of the Racket two-dimensional image test stream, and Figure 2(b) is a screenshot of the Racket depth image test stream.
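A minimal sketch of how the two test streams could be read frame by frame, assuming OpenCV; the file names are placeholders and the depth stream is assumed to be stored as an 8-bit grayscale video.

```python
# Sketch of step (1): read matching frames from the Racket colour stream and
# its depth stream. File names are hypothetical; any paired 640x360 streams work.
import cv2

color_cap = cv2.VideoCapture("racket_color.avi")   # hypothetical file name
depth_cap = cv2.VideoCapture("racket_depth.avi")   # hypothetical file name

while True:
    ok_c, color = color_cap.read()
    ok_d, depth = depth_cap.read()
    if not (ok_c and ok_d):
        break
    depth_gray = cv2.cvtColor(depth, cv2.COLOR_BGR2GRAY)  # 8-bit depth map
    # ... feed (color, depth_gray) into the layering pipeline ...

color_cap.release()
depth_cap.release()
```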

[0150] (2) Perform median filtering and edge extension on the input depth image to obtain the preprocessed depth image. Figure 3 shows the Racket depth image after preprocessing.
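A minimal sketch of this preprocessing step, assuming NumPy/OpenCV and reading "edge extension" as replicating the image border; the kernel size and border width are illustrative choices, not values given in the patent.

```python
# Sketch of step (2): median-filter the depth map and extend its edges.
# Kernel size and border width are illustrative assumptions.
import cv2
import numpy as np

def preprocess_depth(depth: np.ndarray,
                     median_ksize: int = 5,
                     border: int = 16) -> np.ndarray:
    """Median-filter an 8-bit depth map and replicate its border pixels."""
    # Median filtering suppresses isolated depth-estimation noise.
    filtered = cv2.medianBlur(depth, median_ksize)
    # Edge extension: replicate boundary pixels so later layer shifting
    # does not read outside the image.
    extended = cv2.copyMakeBorder(filtered, border, border, border, border,
                                  borderType=cv2.BORDER_REPLICATE)
    return extended
```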

[0151] (3) Set the number of layers N=17, perform layering on the preprocessed depth image, and obtain the layered depth image. Figure 4(a) is the layer-0 layered depth image.
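The excerpt does not state how the depth range is divided among the N layers; the sketch below assumes uniform quantization of an 8-bit depth map into N = 17 equal bins.

```python
# Sketch of step (3): quantize the preprocessed depth map into N discrete layers.
# Assumes an 8-bit depth map (0-255); N = 17 follows the embodiment.
import numpy as np

def layer_depth(depth: np.ndarray, num_layers: int = 17) -> np.ndarray:
    """Return a per-pixel layer index in [0, num_layers - 1]."""
    # Uniform quantization: each layer covers an equal slice of the depth range.
    layer_index = (depth.astype(np.int32) * num_layers) // 256
    return np.clip(layer_index, 0, num_layers - 1)

def layer_mask(layer_index: np.ndarray, k: int) -> np.ndarray:
    """Binary mask selecting the pixels that belong to layer k."""
    return layer_index == k
```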

[0152] (4) Select the 12th layer as the focus depth layer; layers 0-11 are then the foreground layers and layers 13-16 are the background layers.
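A small sketch of the resulting classification, directly encoding the split stated above (layers below the focus layer are foreground, layers above it are background).

```python
# Sketch of step (4): with layer 12 chosen as the focus depth layer,
# classify each depth layer as foreground, focus, or background.
FOCUS_LAYER = 12   # from the embodiment
NUM_LAYERS = 17

def classify_layers(focus: int = FOCUS_LAYER, n: int = NUM_LAYERS) -> dict:
    """Map each layer index to 'foreground', 'focus', or 'background'."""
    roles = {}
    for k in range(n):
        if k < focus:
            roles[k] = "foreground"   # layers 0-11 in the embodiment
        elif k == focus:
            roles[k] = "focus"
        else:
            roles[k] = "background"   # layers 13-16 in the embodiment
    return roles
```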

[0153] (5) According to the layered depth image, the Racket two-dimensional...


Abstract

The invention discloses a method for generating virtual multi-viewpoint images based on depth image layering, which comprises the following steps: (1) preprocessing the depth images to be processed; (2) layering the preprocessed depth images so as to obtain layered depth images; (3) selecting the depth layer on which the camera array is focused, and determining the foreground layers and background layers; (4) layering the two-dimensional images to be processed so as to obtain layered two-dimensional images; (5) calculating the disparity values corresponding to the two-dimensional image layers of the various depth layers; (6) expanding the layered two-dimensional images so as to obtain expanded layered two-dimensional images; (7) obtaining the virtual two-dimensional images of the various virtual viewpoint positions by using a weighted layer translation algorithm. By using the method of the invention, the virtual multi-viewpoint images required by a multi-viewpoint autostereoscopic display system can be generated rapidly and effectively without the parameters of a virtual multi-viewpoint camera array model, and the method has good fault tolerance to the input depth images.
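The excerpt does not give the exact disparity formula or weighting used in steps (5)-(7); the sketch below only illustrates the general idea of shifting each two-dimensional image layer horizontally by a per-layer disparity and compositing the layers back-to-front for one virtual viewpoint, under an assumed linear disparity model.

```python
# Rough sketch of steps (5)-(7): shift each two-dimensional image layer by a
# per-layer disparity and composite the layers back-to-front into one virtual
# view. The linear disparity model and the black-pixel mask are assumptions
# for illustration, not the patent's exact formulation.
import numpy as np

def synthesize_view(layers, layer_indices, focus_layer, view_gain):
    """layers: list of HxWx3 uint8 arrays, one per depth layer;
    layer_indices: matching list of integer layer numbers (larger = farther);
    view_gain: signed per-viewpoint scale for the horizontal shift."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    # Composite from the farthest layer to the nearest one so that nearer
    # layers overwrite farther ones where they overlap.
    for img, k in sorted(zip(layers, layer_indices), key=lambda p: -p[1]):
        # Assumed linear model: shift grows with distance from the focus layer.
        disparity = int(round(view_gain * (focus_layer - k)))
        shifted = np.roll(img, disparity, axis=1)  # horizontal layer translation
        # np.roll wraps around; in the actual method the edge extension of
        # steps (2)/(6) is what handles the image borders.
        mask = shifted.any(axis=2)  # crude "layer occupies this pixel" test
        out[mask] = shifted[mask]
    return out
```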

Description

Technical Field

[0001] The invention relates to a method for generating virtual multi-viewpoint images in a multi-viewpoint autostereoscopic display system, in particular to a method for generating virtual multi-viewpoint images based on depth image layering.

Background Art

[0002] Multi-viewpoint autostereoscopic display technology is a technology that simultaneously presents images of multiple different viewpoints of the same scene to the viewer, who can see stereoscopic images from multiple positions without wearing glasses. The technique requires multiple two-dimensional images of the same scene from different viewpoints.

[0003] In order to obtain multiple two-dimensional images of the same scene from different viewpoints, there are currently two main solutions.

[0004] One method is to use a camera array composed of multiple cameras to shoot simultaneously, and then simultaneously transmit multiple streams of two-dimensional image data to the display terminal for stereoscopic ...


Application Information

IPC(8): H04N13/00; G06T11/00
Inventor: 席明, 薛玖飞, 王梁昊, 李东晓, 张明
Owner: 万维显示科技(深圳)有限公司