Multi-view stereo vision method integrated with spatial propagation and pixel-level optimization

A spatial propagation and stereo vision technology, applied in the field of computer vision, which solves the problems of incomplete, low-accuracy 3D point clouds and low efficiency, and achieves the effects of reducing repeated calculations, cutting redundant computation, and accelerating convergence.

Active Publication Date: 2018-06-29
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0005] In view of the above defects or improvement needs of the prior art, the present invention provides a multi-view stereo vision method combining spatial propagation and pixel-level optimization, thereby solving the technical problems of low efficiency and of incomplete, low-precision 3D point clouds obtained in the prior art.



Examples


Specific embodiments

[0042] The specific implementation is as follows:

[0043] (1) Obtain camera parameters according to multiple pictures of the scene, extract the depth map of each picture according to the camera parameters, and update the depth map of each picture by using space propagation;

[0044] (2) Iteratively perform pixel-level optimization and filtering on the updated depth map to obtain the processed picture;

[0045] (3) Perform depth map fusion on each pixel of the processed pictures in parallel to obtain a 3D point cloud of the scene.
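Step (3) lifts each pixel of a processed depth map into 3D independently, which is why the fusion parallelizes well. A minimal NumPy sketch of this per-pixel back-projection, assuming a pinhole camera with intrinsics K and pose (R, t); the cross-view consistency filtering a real fusion would apply is omitted, and the concrete values are made up for illustration:

```python
import numpy as np

def backproject(depth, K, R, t):
    """Back-project every pixel of a depth map into world coordinates.
    Each pixel is lifted independently, so the computation parallelizes
    trivially (here it is simply vectorized with NumPy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                 # camera-space ray directions
    cam_pts = rays * depth.reshape(1, -1)         # scale each ray by its depth
    world = R.T @ (cam_pts - t.reshape(3, 1))     # camera frame -> world frame
    return world.T                                # N x 3 point cloud

# Usage on a tiny synthetic view: identity pose, constant depth 5.
K = np.array([[100.0, 0, 2], [0, 100.0, 2], [0, 0, 1]])
cloud = backproject(np.full((4, 4), 5.0), K, np.eye(3), np.zeros(3))
print(cloud.shape)  # (16, 3): one 3D point per pixel
```

With the identity pose, every recovered point keeps z = 5, matching the constant depth map.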

[0046] In the embodiment of the present invention, as shown in Figure 2, step (1) specifically includes:

[0047] (1-1) Take any one of the multiple pictures of the scene as the reference picture C_r; obtain the neighbor pictures C_1, C_2, C_3 of the reference picture from the multiple pictures of the scene; obtain the camera parameters of the reference picture and of the neighboring pictures through structure from motion ...
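The spatial-propagation update of step (1) can be illustrated with a PatchMatch-style sweep; the specific scheme below is an assumption for illustration, not the patent's exact procedure. Each pixel adopts the depth hypothesis of its left or upper neighbor whenever that lowers its matching cost, so one good estimate spreads across the map instead of being recomputed per pixel:

```python
import numpy as np

def spatial_propagation(depth, cost):
    """One left-to-right, top-to-bottom propagation sweep: a pixel adopts a
    neighbor's depth when that lowers its matching cost. `cost(y, x, d)` is
    a hypothetical per-pixel cost; a real MVS system would use a photometric
    score such as NCC over a patch."""
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            best = cost(y, x, depth[y, x])
            for ny, nx in ((y, x - 1), (y - 1, x)):   # left and upper neighbors
                if ny >= 0 and nx >= 0:
                    c = cost(y, x, depth[ny, nx])
                    if c < best:                       # neighbor's depth fits better
                        best = c
                        depth[y, x] = depth[ny, nx]
    return depth

# Toy demo: the true depth is constant, one seed pixel starts correct,
# and a single sweep propagates the correct value across the whole map.
true_depth = np.full((4, 4), 2.0)
est = np.zeros((4, 4))
est[0, 0] = 2.0                                      # correctly matched seed
cost = lambda y, x, d: abs(d - true_depth[y, x])     # stand-in matching cost
est = spatial_propagation(est, cost)
print(np.allclose(est, 2.0))  # True
```

This reuse of neighboring estimates is what reduces the repeated calculations mentioned in the summary.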



Abstract

The invention discloses a multi-view stereo vision method integrated with spatial propagation and pixel-level optimization, which belongs to the field of computer vision. The multi-view stereo vision method comprises the following steps: camera parameters are obtained according to a plurality of pictures of a scene, a depth image of each picture is extracted according to the camera parameters, and spatial propagation is utilized to update the depth image of each picture; pixel-level optimization and filtration are iteratively carried out on the updated depth images, so that processed pictures are obtained; and depth image fusion is concurrently carried out on each pixel in the processed pictures, so that a three-dimensional point cloud of the scene is obtained. In initial depth image extraction, the method utilizes spatial propagation to reduce redundant computation between images, thus increasing efficiency; in addition, iterative depth image optimization and filtration reduce noise points while increasing the integrity of the depth images; ultimately, highly efficient depth image fusion increases efficiency while ensuring the density of the point cloud.

Description

Technical field

[0001] The invention belongs to the field of computer vision, and more specifically relates to a multi-view stereoscopic vision method combining spatial propagation and pixel-level optimization.

Background technique

[0002] Reconstructing dense 3D models of scenes from multiple images has been a hot topic in computer vision for the past few decades. The general technical process for this problem is very mature: input a set of pictures of a certain scene; first obtain the camera parameters through calibration or structure from motion (Structure From Motion, SfM); then use the estimated camera parameters to obtain a dense three-dimensional point cloud of the scene. This technology is called multi-view stereo vision (Multi-View Stereo, MVS). The resulting dense point clouds can be used in many fields: classification, image-based rendering, localization, archaeological research, artistic creation, and more. Although the MVS algorithm has been studied for decades...
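The role the SfM-estimated camera parameters play downstream can be shown with the pinhole projection x ~ K(RX + t), which relates a 3D scene point to its pixel in each view; the intrinsics and point below are made-up values for illustration:

```python
import numpy as np

def project(X, K, R, t):
    """Project a 3D world point into pixel coordinates with the pinhole
    model x ~ K (R X + t): multiply, then divide out the depth."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Illustrative intrinsics (focal length 800, principal point (320, 240))
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
X = np.array([0.5, -0.25, 4.0])          # world point, camera at the origin
uv = project(X, K, np.eye(3), np.zeros(3))
print(uv)  # [420. 190.]
```

MVS uses exactly these per-view projections to compare pictures and triangulate the dense point cloud.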

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/55, G06T7/80, G06T5/50, G06T17/00
CPC: G06T5/50, G06T7/55, G06T7/80, G06T17/00, G06T2207/10028, G06T2207/20221
Inventor: 陶文兵, 黄文杰, 徐青山
Owner HUAZHONG UNIV OF SCI & TECH