Method for generating virtual-real fusion image for stereo display

A virtual-real fusion and image generation technology, applied in image enhancement, image analysis, image data processing, etc. It addresses the problem of insufficient camera tracking accuracy, achieves a realistic three-dimensional display effect, and enables occlusion judgment and collision detection between the virtual and real scenes.

Active Publication Date: 2015-04-08
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

On this basis, many camera tracking schemes based on monocular RGB cameras have emerged, such as the MonoSLAM system proposed by Davison, and ...




Embodiment Construction

[0033] In order to describe the present invention more specifically, the technical solutions of the present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0034] The method for generating a virtual-real fusion image for stereoscopic display of the present invention comprises the following steps:

[0035] (1) Use a monocular RGB-D camera to obtain the depth information and color information of the scene.
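As a concrete illustration of step (1), the depth map returned by an RGB-D camera can be lifted into camera-space 3D points with the inverse pinhole projection. This is a minimal sketch, not code from the patent; the intrinsics (fx, fy, cx, cy) below are placeholder values chosen for the example.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift an H x W depth map (metres) to an H x W x 3 map of camera-space
    points via the inverse pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth], axis=-1)

# Tiny synthetic depth map; the intrinsics are Kinect-like placeholders.
depth = np.array([[1.0, 1.0],
                  [2.0, 2.0]])
pts = backproject_depth(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

The resulting point map preserves the pixel grid, which is convenient for the per-pixel depth comparisons used later in the pipeline.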

[0036] (2) Use the camera tracking module to determine the camera parameters of each frame according to the 3D scene reconstruction model, and at the same time integrate the scene depth information and color information into the 3D scene reconstruction model frame by frame.

[0037] 2.1 Use the Raycast algorithm to extract the depth map of the previous frame from the 3D scene reconstruction model according to the saved camera pose of the previous frame;

[0038] 2.2 Preprocess the depth map of the current frame. ...
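Steps 2.1 and 2.2 follow the frame-to-model tracking idea used in KinectFusion-style systems: a depth map is synthesized from the volumetric reconstruction by ray casting and then aligned with the preprocessed live frame. The sketch below is a minimal TSDF ray caster under assumed conventions (axis-aligned volume starting at the world origin, camera-to-world pose matrix, half-voxel step size); it is illustrative, not the patent's exact implementation.

```python
import numpy as np

def raycast_tsdf(tsdf, voxel_size, pose, fx, fy, cx, cy, h, w, near=0.1, far=2.5):
    """March a ray per pixel through a TSDF volume (camera-to-world `pose`)
    and return the distance of the first positive-to-negative zero crossing,
    i.e. the reconstructed surface seen from that camera."""
    depth = np.zeros((h, w))
    R, t = pose[:3, :3], pose[:3, 3]
    step = 0.5 * voxel_size
    for v in range(h):
        for u in range(w):
            ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            ray = R @ (ray / np.linalg.norm(ray))
            prev, d = None, near
            while d < far:
                i, j, k = ((t + d * ray) / voxel_size).astype(int)
                if not (0 <= i < tsdf.shape[0] and
                        0 <= j < tsdf.shape[1] and
                        0 <= k < tsdf.shape[2]):
                    break
                val = tsdf[i, j, k]
                if prev is not None and prev > 0 >= val:
                    depth[v, u] = d  # surface crossed between samples
                    break
                prev, d = val, d + step
    return depth

# Demo: a 3.2 m cube volume containing a wall at z = 1.0 m (truncation 0.3 m),
# viewed by a camera at the volume's x/y centre looking down +z.
vol = np.zeros((32, 32, 32))
for k in range(32):
    vol[:, :, k] = np.clip((1.0 - k * 0.1) / 0.3, -1.0, 1.0)
pose = np.eye(4)
pose[:3, 3] = [1.6, 1.6, 0.0]
d = raycast_tsdf(vol, 0.1, pose, fx=100.0, fy=100.0, cx=1.0, cy=1.0, h=3, w=3)
```

A production system would interpolate the TSDF trilinearly and refine the crossing point, but the zero-crossing search above is the core of extracting the previous frame's depth map from the model.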



Abstract

The invention discloses a method for generating a virtual-real fusion image for stereo display. The method comprises the following steps: (1) using a monocular RGB-D camera to acquire a depth map and a color map of the real scene; (2) reconstructing a three-dimensional scene surface model and computing the camera parameters; (3) mapping to obtain the depth map and the color map at a virtual viewpoint position; (4) completing the three-dimensional registration of a virtual object, rendering to obtain the depth map and the color map of the virtual object, and performing virtual-real fusion to obtain virtual-real fusion content for stereo display. In the method, a monocular RGB-D camera is used for shooting, the three-dimensional scene surface model is reconstructed frame by frame, and the model is used simultaneously for camera tracking and virtual viewpoint mapping. As a result, higher camera tracking accuracy and virtual object registration accuracy can be achieved, the holes that arise in image-based virtual viewpoint rendering can be handled effectively, occlusion judgment and collision detection between the virtual and real scenes can be realized, and a stereo display device can be used to obtain a realistic stereoscopic display effect.
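The occlusion judgment described in the abstract reduces, per pixel, to a depth test between the real scene's depth map and the rendered virtual object's depth map: whichever layer is closer wins the pixel. The sketch below shows that compositing rule under assumed conventions (zero depth marks an empty/invalid pixel); array names and shapes are illustrative, not from the patent.

```python
import numpy as np

def fuse_virtual_real(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion test between the real and virtual layers.
    A virtual pixel is composited only where it exists (depth > 0) and is
    nearer than the real surface (or the real depth is missing)."""
    virt_valid = virt_depth > 0
    virt_in_front = virt_valid & ((virt_depth < real_depth) | (real_depth <= 0))
    out = real_rgb.copy()
    out[virt_in_front] = virt_rgb[virt_in_front]
    return out

# 1 x 2 toy image: at pixel 0 the virtual object is closer, at pixel 1 the
# real surface occludes it.
real_rgb = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)
virt_rgb = np.array([[[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)
real_depth = np.array([[1.0, 0.5]])
virt_depth = np.array([[0.5, 1.0]])
out = fuse_virtual_real(real_rgb, real_depth, virt_rgb, virt_depth)
```

Because the virtual viewpoint's real-scene depth map comes from the reconstructed model rather than a warped image, this test stays consistent even in regions that would be holes under purely image-based rendering.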

Description

Technical field

[0001] The invention belongs to the technical field of three-dimensional stereoscopic imaging, and in particular relates to a method for generating a virtual-real fusion image for stereoscopic display.

Background technique

[0002] Stereoscopic display technology has become a major theme in the IT, communications, and broadcasting industries, and a term familiar to the public. As stereoscopic display technology improves, enthusiasm for and expectations of 3D content keep rising, and with continually growing market demand, people are seeking more convenient, faster, and lower-cost ways to generate 3D content. Virtual-real fusion refers to the technology of integrating virtual information into the real world by computer; it has broad application prospects in medicine, entertainment, and the military. The combination of stereoscopic display technology and virtual-real fusio...

Claims


Application Information

IPC(8): G06T5/50, G06T7/00, G06T17/00
Inventor 张骏飞王梁昊李东晓张明
Owner ZHEJIANG UNIV