Virtual viewpoint synthesis method based on depth and occlusion information

A virtual viewpoint synthesis technology, applied in the field of virtual viewpoint synthesis based on depth and occlusion information. It addresses the hole problem in virtual viewpoint images, which is otherwise difficult to solve, thereby improving accuracy, resolving occlusions, and improving image quality.

Active Publication Date: 2008-11-26
万维显示科技(深圳)有限公司
Cites: 0 · Cited by: 64

AI Technical Summary

Problems solved by technology

The main disadvantage of recovering a virtual viewpoint from 2D video plus its depth map is that the hole problem is difficult to solve; holes occur because some object regions visible in the virtual viewpoint are invisible in the reference viewpoint.



Examples


Embodiment Construction

[0063] The virtual viewpoint synthesis method based on depth and occlusion information includes the following steps:

[0064] 1) Under the parallel optical axis camera array model, determine the 3D warping equation of the depth-image-based rendering method;

[0065] The cameras of the reference viewpoint and of the viewpoint to be rendered conform to the parallel optical axis camera array model, as shown in Figure 1; this simplifies the warping equations of the depth-image-based rendering method.

[0066] Referring to the parallel optical axis camera array model shown in Figure 1: camera O1 is the reference viewpoint and camera O2 is the viewpoint to be rendered. The coordinate system of camera O1 coincides with the world coordinate system, the focal length of the cameras is f, and the rendered viewpoint O2 has only a horizontal displacement c relative to the reference viewpoint O1, with no rotation, so the depth z of the same object relative to the two viewpoints is the same.
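Under this parallel-axis model, the 3D warping reduces to a horizontal pixel shift by the disparity d = f·c / z. The sketch below is a hedged NumPy illustration of that reduced warping (the patent's actual implementation also uses half-pixel precision and occlusion compensation, which are not reproduced here); the function name and far-to-near drawing order are choices made for this example, not taken from the patent text.

```python
import numpy as np

def warp_reference_to_virtual(ref_image, depth, f, c):
    """Warp a reference view to a virtual view under the
    parallel-optical-axis camera model: a pixel at column x with
    depth z moves to column x - f*c/z; rows are unchanged."""
    h, w = depth.shape
    virtual = np.zeros_like(ref_image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = f * c / depth  # per-pixel horizontal shift

    # Draw pixels far-to-near so that nearer pixels (larger disparity)
    # overwrite farther ones, resolving depth-order conflicts.
    order = np.argsort(depth, axis=None)[::-1]
    ys, xs = np.unravel_index(order, depth.shape)
    xt = np.round(xs - disparity[ys, xs]).astype(int)
    valid = (xt >= 0) & (xt < w)
    virtual[ys[valid], xt[valid]] = ref_image[ys[valid], xs[valid]]
    filled[ys[valid], xt[valid]] = True
    return virtual, filled  # False entries in 'filled' are holes
```

Pixels that receive no source sample (the `filled == False` positions) are exactly the disocclusion holes that the later compensation and interpolation steps must fill.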



Abstract

The invention discloses a virtual viewpoint synthesis method based on depth and occlusion information, comprising the steps of: (1) defining the 3D warping equation of a depth-image-based rendering method; (2) determining the pixel rendering order according to the spatial positions of the reference and virtual viewpoints; (3) rendering the virtual viewpoint image from the main reference viewpoint using a horizontal half-pixel depth-image-based rendering method; (4) performing occlusion compensation on the virtual viewpoint image according to the occlusion information of the auxiliary viewpoint, to obtain a high-resolution virtual viewpoint image; (5) converting the high-resolution image back to the original resolution; (6) filling the residual holes with an asymmetric linear interpolation method; (7) smoothing the blocking effect to obtain a high-quality synthesized virtual viewpoint image. The method applies to the parallel optical axis camera array model, with the virtual viewpoint to be synthesized lying between the main and auxiliary viewpoints, and can effectively synthesize high-quality virtual viewpoint images.
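The abstract does not spell out the asymmetric weighting used in step (6), so the following Python sketch shows one plausible reading under stated assumptions: each residual hole is filled row-wise between its two valid boundary pixels, with the blend biased toward one boundary (here the right one, via an assumed `bg_bias` parameter), since disocclusion holes normally belong to the background and a symmetric average would bleed foreground color into them. The function name, parameter, and weighting form are all illustrative assumptions, not the patent's exact formula.

```python
import numpy as np

def fill_holes_asymmetric(image, filled, bg_bias=0.5):
    """Row-wise residual hole filling by asymmetric linear interpolation.

    image   : (H, W) float array, warped virtual view (holes hold any value)
    filled  : (H, W) bool array, True where a pixel was rendered
    bg_bias : ASSUMED asymmetry parameter in [0, 1]; 0 gives plain linear
              interpolation, larger values pull hole pixels toward the
              right boundary sample (assumed background side).
    """
    out = image.astype(float).copy()
    h, w = filled.shape
    for y in range(h):
        x = 0
        while x < w:
            if filled[y, x]:
                x += 1
                continue
            x_end = x                      # locate the hole run [x, x_end)
            while x_end < w and not filled[y, x_end]:
                x_end += 1
            left = out[y, x - 1] if x > 0 else None
            right = out[y, x_end] if x_end < w else None
            for xi in range(x, x_end):
                if left is None and right is None:
                    continue               # whole row is a hole; skip
                if left is None:
                    out[y, xi] = right     # extrapolate from one side
                elif right is None:
                    out[y, xi] = left
                else:
                    t = (xi - x + 1) / (x_end - x + 1)   # 0..1 across hole
                    wr = bg_bias + (1.0 - bg_bias) * t   # biased right weight
                    out[y, xi] = (1.0 - wr) * left + wr * right
            x = x_end
    return out
```

With `bg_bias=0` this degenerates to ordinary linear interpolation; with `bg_bias=1` every hole pixel simply copies the right boundary sample.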

Description

Technical Field

[0001] The invention relates to the field of image processing, in particular to a method for synthesizing virtual viewpoints based on depth and occlusion information.

Background Technology

[0002] With the development of video-related technologies, people increasingly pursue free-viewpoint and stereoscopic vision for next-generation video applications, which requires that the corresponding 3D scene can be completely reconstructed on the display side.

[0003] To obtain free stereoscopic vision on the display side and reconstruct the corresponding 3D scene, it is necessary to obtain video images from multiple viewpoints, or even any viewpoint, of the 3D scene. There are two main ways to realize this: one is the model-based 3D modeling method; the other is the image-based synthesis method. The viewpoint image reconstructed by the model-based 3D modeling method is of higher quality, but at the same time the computational complexity is...

Claims


Application Information

IPC(8): H04N13/00, H04N7/26, G06T7/00
Inventors: 冯雅美, 李东晓, 张明, 石冰, 骆凯, 谢贤海, 何赛军
Owner 万维显示科技(深圳)有限公司