
Quasi-three dimensional reconstruction method for acquiring two-dimensional videos of static scenes

A quasi-three-dimensional reconstruction technique for two-dimensional video, applied in 3D modeling, image data processing, instruments, and similar areas; it addresses problems such as the loss of edge information in depth maps.

Status: Inactive; Publication Date: 2013-08-07
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

Depth-map filtering methods are comparatively simple and can reduce artificial traces in the occluded regions of virtual-viewpoint images, but they also cause the depth map to lose part of its edge information.
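The edge-loss effect described here can be seen with a toy example: a simple averaging filter applied to a depth scanline smooths occlusion artifacts but turns a sharp depth discontinuity into a ramp. This is an illustrative sketch of the problem, not a filter specified by the patent; the function name and window size are assumptions.

```python
import numpy as np

def box_filter_1d(depth_row, k=3):
    """Naive sliding-window averaging filter over a depth scanline.
    Smooths noise and occlusion artifacts, but blurs depth
    discontinuities (edges) into ramps."""
    pad = k // 2
    padded = np.pad(depth_row, pad, mode='edge')
    return np.array([padded[i:i + k].mean() for i in range(len(depth_row))])

# A sharp depth edge between a near object (0) and a far one (10):
edge = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0])
smoothed = box_filter_1d(edge)
# the step at index 3 is now spread across indices 2 and 3
```

After filtering, the clean step becomes intermediate values around the discontinuity, which is exactly the "lost edge information" the invention aims to avoid.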




Embodiment Construction

[0036] The technical scheme of the present invention is described in detail below with reference to the accompanying drawings:

[0037] The idea of the present invention is to solve the video disparity-map sequence by combining epipolar rectification with stereo matching, avoiding the computationally expensive operations required by MVS-based 3D video reconstruction methods, such as structure from motion (SFM), belief propagation (BP), image segmentation, and bundle adjustment, and thereby simplifying the solution of the video disparity-map sequence. The present invention further adopts a simple and easy-to-operate quasi-Euclidean epipolar rectification method. As a preferred embodiment of the method of the present invention, first, for each frame in the two-dimensional video, another frame a fixed number of frames away is extracted to simulate a dual-viewpoint image pair; then a quasi-Euclidean epipolar rectification method is adopted to rectify the dual-viewpoint image...
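The frame-pairing step described above, pairing each frame with one a fixed number of frames away to simulate a dual-viewpoint image pair, can be sketched as follows. The function name, the `gap` parameter, and the end-of-sequence clamping are illustrative assumptions, not details specified by the patent.

```python
def extract_stereo_pairs(num_frames, gap):
    """For each frame index i, pair it with the frame `gap` frames
    ahead to simulate a dual-viewpoint (left/right) image pair.
    Frames near the end of the sequence are clamped to the last
    available frame (an assumption; the patent does not say how
    the tail of the video is handled)."""
    pairs = []
    for i in range(num_frames):
        j = min(i + gap, num_frames - 1)
        pairs.append((i, j))
    return pairs
```

Each resulting pair would then be fed to the epipolar rectification and stereo matching stages.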



Abstract

The invention discloses a quasi-three-dimensional reconstruction method for two-dimensional videos capturing static scenes, which belongs to the field of computer-vision three-dimensional video reconstruction. The method comprises the following steps: step A, extracting dual-viewpoint image pairs from the frames of a two-dimensional video; step B, performing epipolar rectification on each dual-viewpoint image pair; step C, adopting a binocular stereo matching method based on global optimization to solve the globally optimal disparity map of each rectified dual-viewpoint image pair; step D, inversely rectifying the globally optimal disparity maps so as to obtain the disparity maps corresponding to all the frames of the two-dimensional video; step E, splicing the disparity maps obtained in step D according to the video frame order to form a disparity-map sequence, and optimizing the sequence; and step F, combining all the extracted video frames with their corresponding disparity maps, adopting a depth-image-based rendering (DIBR) method to recover virtual viewpoint images, and splicing the virtual viewpoint images into a virtual viewpoint video. The method is low in computational complexity, simple, and practicable.
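Step F's DIBR rendering can be illustrated with a minimal forward-warping sketch: each pixel is shifted horizontally by its disparity to form the virtual view, and pixels that nothing maps to remain as holes. This is a simplified illustration only, not the patent's implementation; the function name, integer disparities, and zero-filled holes are my assumptions.

```python
import numpy as np

def dibr_warp(image, disparity):
    """Forward-warp an image into a virtual viewpoint: pixel (y, x)
    moves to (y, x - d). Unmapped target pixels stay 0 (holes),
    which real DIBR pipelines must inpaint."""
    h, w = image.shape[:2]
    virtual = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xv = x - int(disparity[y, x])
            if 0 <= xv < w:
                virtual[y, xv] = image[y, x]
    return virtual

# One scanline shifted uniformly by 1 pixel:
row = np.array([[10, 20, 30, 40]])
disp = np.ones((1, 4), dtype=int)
virtual = dibr_warp(row, disp)
```

In the warped scanline the content shifts left by one pixel and a hole (0) opens at the right edge, which is why the method must optimize the disparity-map sequence before rendering.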

Description

Technical field
[0001] The invention relates to a quasi-three-dimensional reconstruction method for two-dimensional videos capturing static scenes, and belongs to the field of three-dimensional video reconstruction in computer vision.
Background technique
[0002] Two-dimensional (2D) video refers to single-channel video captured by an ordinary camera; three-dimensional (3D) video refers to dual-channel video captured by a stereo camera. The quasi-3D reconstruction of 2D video is based mainly on the principle of binocular stereo vision. Its task is to reconstruct a second, virtual-viewpoint video from the scene depth information implicit in the 2D video, so as to simulate the binocular viewing process. Related research has grown steadily since the early 1990s, focusing mainly on video depth-information recovery and virtual-viewpoint video generation. In recent years, driven by the market demand for 3D movies and television, video 3D reconstruction has become a research hotspot...
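The binocular stereo principle mentioned above ties scene depth to disparity through the standard relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two viewpoints, and d the disparity in pixels. A minimal sketch of that relation (the function name and parameters are illustrative, not from the patent):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard binocular stereo relation: Z = f * B / d.
    Larger disparity means the point is closer to the cameras."""
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000-pixel focal length and a 0.1 m baseline, a 10-pixel disparity corresponds to a depth of 10 m; this inverse relation is why disparity maps serve as the depth cue for virtual-view synthesis.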

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; H04N13/00
Inventors: 刘天亮, 王亮, 莫一鸣, 朱秀昌
Owner: NANJING UNIV OF POSTS & TELECOMM