
Multi-camera dynamic scene 3D (three-dimensional) rebuilding method based on joint optimization

A multi-camera dynamic-scene technology, applied in 3D modeling, image analysis, image data processing, etc., addressing problems such as insufficient robustness to errors in optical flow estimation.

Inactive Publication Date: 2013-04-17
ZHEJIANG UNIV
Cites: 3 · Cited by: 13

AI Technical Summary

Problems solved by technology

The methods of Larsen and Lei optimize depth values using, respectively, belief propagation in the space-time domain and 3D smoothing in the time domain; as a result, these methods are not robust to serious errors in optical flow estimation.
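The weakness described above can be made concrete: if temporal depth smoothing trusts the optical flow unconditionally, a gross flow error is blended directly into the depth map. A minimal illustrative mitigation (not the patent's method) is to weight the flow-warped neighbor depth by a per-pixel flow-confidence map; the function name, the backward-flow convention, and the confidence maps are all assumptions for illustration.

```python
import numpy as np

def temporal_depth_smoothing(depths, back_flows, flow_conf, lam=0.5):
    # depths:     list of HxW depth maps, one per frame
    # back_flows: back_flows[t] is an HxWx2 flow mapping frame-t pixels
    #             into frame t-1 (channel 0 = x offset, channel 1 = y offset)
    # flow_conf:  flow_conf[t] is an HxW confidence map in [0, 1];
    #             low confidence means the flow at that pixel is unreliable
    H, W = depths[0].shape
    ys, xs = np.mgrid[0:H, 0:W]
    smoothed = [depths[0].copy()]
    for t in range(1, len(depths)):
        # Warp the previous smoothed depth into frame t along the flow
        # (nearest-neighbor lookup, clamped to the image border).
        fx = np.clip(np.rint(xs + back_flows[t][..., 0]).astype(int), 0, W - 1)
        fy = np.clip(np.rint(ys + back_flows[t][..., 1]).astype(int), 0, H - 1)
        warped = smoothed[t - 1][fy, fx]
        # Down-weight the temporal term where the flow is unreliable,
        # so a gross flow error cannot overwrite a good per-frame depth.
        w = lam * flow_conf[t]
        smoothed.append((1.0 - w) * depths[t] + w * warped)
    return smoothed
```

With `flow_conf` identically zero the smoother degenerates to the per-frame depths, which is exactly the fallback a robust method needs when flow estimation fails.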




Embodiment

[0121] The invention discloses a multi-camera dynamic scene 3D reconstruction method based on joint optimization. It solves spatio-temporally consistent depth recovery and static/dynamic two-layer segmentation simultaneously, does not require an accurate static/dynamic segmentation as input, is robust, and needs no prior knowledge of the background. The method allows the participating cameras to move freely and independently, and can handle dynamic scenes captured by as few as 2~3 cameras. As shown in Figure 1, the method consists of three main steps: first, at each time t, initialize the depth map at time t using the synchronized video frames spanning the M cameras; second, perform spatio-temporally consistent depth optimization and static/dynamic two-layer segmentation jointly; we iterate the spatio-temporal consistency optimization for 2~3 rounds, finally achieving a high-quality dynamic 3D reconstruction. The implementation ...
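The per-timestamp loop above (depth initialization, then 2~3 alternating rounds of two-layer segmentation and spatio-temporally consistent refinement) can be sketched as follows. All three sub-steps are deliberately toy stand-ins (median-based initialization, a depth-change threshold for segmentation, temporal averaging for refinement); the patent's actual joint energy minimization is not reproduced here, and every name below is hypothetical.

```python
import numpy as np

def joint_reconstruction(views_t, views_t_prev, num_rounds=3):
    # views_t / views_t_prev: per-view HxW "depth hypothesis" maps from the
    # M synchronized cameras at time t and t-1 (toy stand-in for real
    # multi-view matching costs).
    # Step 1: initialize the depth map at time t from the M views
    # (toy initializer: per-pixel median across views).
    depth = np.median(np.stack(views_t), axis=0)
    prev_depth = np.median(np.stack(views_t_prev), axis=0)
    for _ in range(num_rounds):
        # Step 2a: static/dynamic two-layer segmentation
        # (toy rule: pixels whose depth changed are dynamic, label 1).
        dynamic = (np.abs(depth - prev_depth) > 0.1).astype(np.uint8)
        # Step 2b: spatio-temporally consistent refinement
        # (toy rule: pull static pixels toward the previous frame's depth
        # for temporal consistency; leave dynamic pixels to per-frame data).
        static = dynamic == 0
        depth = depth.copy()
        depth[static] = 0.5 * depth[static] + 0.5 * prev_depth[static]
    return depth, dynamic
```

The point of the skeleton is the alternation: segmentation decides where temporal consistency may be enforced, and the refined depth in turn sharpens the segmentation on the next round.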



Abstract

The invention discloses a multi-camera dynamic scene 3D (three-dimensional) rebuilding method based on joint optimization. The method solves spatio-temporally consistent depth recovery and static/dynamic two-layer segmentation simultaneously, is robust, and requires neither prior knowledge of the background nor an accurate static/dynamic segmentation as input. The participating cameras may move freely and independently, and dynamic scenes captured by only a few cameras can be processed. The method comprises the following steps: I, initializing the depth map at each time t using the synchronized video frames spanning the M cameras; and II, performing spatio-temporally consistent depth optimization and static/dynamic two-layer segmentation jointly, iterating the spatio-temporal consistency optimization to achieve high-quality dynamic 3D rebuilding. The method is robust to erroneous segmentation information arising in complex scenes, and has high application value in fields such as 3D imaging, 3D animation, augmented reality, and motion capture.

Description

technical field
[0001] The invention relates to stereo matching and depth recovery methods, in particular to a multi-camera dynamic scene 3D reconstruction method based on joint optimization.
Background technique
[0002] Dense depth recovery from video is one of the fundamental technologies of mid-level computer vision, with important applications in fields such as 3D modeling, 3D imaging, augmented reality, and motion capture. These applications usually require depth recovery results of high accuracy and spatio-temporal consistency.
[0003] The difficulty of dense depth recovery from video lies in obtaining depth values of high precision and spatio-temporal consistency for both the static and the dynamic objects in the scene. Although current depth recovery techniques for static scenes can already recover high-precision depth information, the natural world is full of moving objects. For dynamic objects ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06T7/00
Inventor: Guofeng Zhang (章国锋), Hujun Bao (鲍虎军), Hanqing Jiang (姜翰青)
Owner: ZHEJIANG UNIV