Image depth estimation method and system based on CNN (Convolutional Neural Network) and depth filter

An image depth estimation technology in the field of 3D vision, addressing the problems that absolute scale is lost, that depth cannot be computed under pure rotation, and that the scope of application is narrow; its effects include reducing the number of iterations required, overcoming absolute-scale loss, and overcoming object-edge blur.

Inactive Publication Date: 2018-10-02
CHINA UNIV OF GEOSCIENCES (WUHAN)

AI Technical Summary

Problems solved by technology

The disadvantage is that, due to its computational complexity, dense depth reconstruction is generally performed offline, which takes a long time. Moreover, the structure-from-motion method has serious flaws: the loss of absolute scale a...



Embodiment Construction

[0040] To make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments.

[0041] The overall flow of the image depth estimation method based on a CNN and a depth filter is shown in Figure 1. The method is divided into four parts: obtaining image depth estimates, pose estimation, pose optimization, and depth image acquisition.

[0042] 1. Obtain image depth estimates

[0043] Obtain multiple color images continuously captured by the camera of the same target; select one of them as the reference image and the remaining color images as associated images; then obtain a depth estimate for each pixel of the reference image through the CNN.
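The CNN itself is a trained deep network, which the patent does not specify in detail. As a minimal stand-in, the sketch below uses a single untrained convolution followed by a softplus, only to illustrate the interface the rest of the pipeline relies on: a color/grayscale reference image in, one positive depth estimate per pixel out. The network architecture here is purely illustrative.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 3x3 'same' convolution with zero padding (stand-in for a CNN layer)."""
    h, w = img.shape
    pad = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    return out

def toy_cnn_depth(reference_img):
    """Placeholder for the trained depth-estimation CNN: one convolution
    followed by a softplus, so every per-pixel depth estimate is positive.
    A real system would use a deep network trained on RGB-D data."""
    kernel = np.full((3, 3), 1.0 / 9.0)   # untrained smoothing kernel
    features = conv2d_same(reference_img, kernel)
    return np.log1p(np.exp(features))     # softplus -> positive depths

img = np.random.rand(8, 8)                # hypothetical reference image
depth = toy_cnn_depth(img)                # same shape as input, all values > 0
```

Whatever network is used, the contract is the same: the depth map is per-pixel, positive, and serves as the prior that the later depth-filter stage refines.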

[0044] 2. Pose estimation

[0045] In the pose (camera translation and rotation) estimation stage, let the image collected at time k be I k , ...
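Pose estimation minimizes a photometric error: each reference feature point is back-projected using its CNN depth estimate, transformed by a candidate pose (R, t), reprojected into the associated image, and the intensity difference is accumulated. The sketch below shows that residual under standard pinhole projection; the variable names, the nearest-neighbour intensity lookup, and the intrinsic matrix are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def project(K, R, t, p_ref, depth):
    """Back-project reference pixel p_ref=(u, v) with its depth estimate,
    transform by the candidate pose (R, t), and reproject with intrinsics K.
    Returns pixel coordinates (u, v) in the associated image."""
    x = depth * (np.linalg.inv(K) @ np.array([p_ref[0], p_ref[1], 1.0]))
    x2 = R @ x + t
    u = K @ (x2 / x2[2])
    return u[:2]

def photometric_error(I_ref, I_k, K, R, t, points, depths):
    """Sum of squared intensity differences between each reference feature
    point and its reprojection in image I_k (nearest-neighbour lookup).
    Minimising this over (R, t) yields the relative camera pose."""
    err = 0.0
    for p, d in zip(points, depths):
        u, v = np.round(project(K, R, t, p, d)).astype(int)
        if 0 <= v < I_k.shape[0] and 0 <= u < I_k.shape[1]:
            err += (I_ref[p[1], p[0]] - I_k[v, u]) ** 2
    return err

# Sanity check: identical images under the identity pose give zero error.
K = np.array([[100.0, 0.0, 4.0], [0.0, 100.0, 4.0], [0.0, 0.0, 1.0]])
I0 = np.random.rand(8, 8)
err_same = photometric_error(I0, I0, K, np.eye(3), np.zeros(3),
                             [(2, 3), (5, 1)], [1.0, 2.0])
```

A real implementation would minimise this error with an iterative solver (e.g. Gauss-Newton on the pose parameters) and interpolate intensities rather than rounding to the nearest pixel.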


Abstract

The invention discloses an image depth estimation method and system based on a CNN (Convolutional Neural Network) and a depth filter. The method comprises: obtaining image depth estimates from the CNN, extracting local feature points from the image, and establishing a minimum photometric error equation to solve the relative pose of the camera; optimizing the camera pose based on feature points matched by epipolar line search, where an image block centered on each feature point is searched along the epipolar line to obtain the best match, and a bundle adjustment equation is constructed from the matches to optimize the relative pose; and filtering depth values based on the camera pose, using Gaussian fusion until the depth values converge. The method overcomes the problem of inaccurate image depth and the absolute-scale loss of monocular vision, and can be applied to fields such as three-dimensional scene reconstruction, indoor positioning, and augmented reality.
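The final stage fuses each pixel's depth estimate with new triangulated observations until it converges. A standard way to realize such Gaussian fusion, sketched below under the assumption that both the current estimate and each observation are modeled as 1-D Gaussians, is the product-of-Gaussians update; the initial prior and observation variance here are hypothetical values.

```python
import numpy as np

def fuse_gaussian(mu, var, mu_obs, var_obs):
    """One depth-filter update: fuse the current depth estimate N(mu, var)
    with a new triangulated observation N(mu_obs, var_obs) via the
    product-of-Gaussians rule. The variance shrinks with every observation,
    so repeated fusion drives the depth estimate toward convergence."""
    var_new = (var * var_obs) / (var + var_obs)
    mu_new = (var_obs * mu + var * mu_obs) / (var + var_obs)
    return mu_new, var_new

mu, var = 2.0, 1.0                  # initial CNN depth prior (hypothetical)
for obs in [1.8, 1.9, 1.85]:        # triangulated depths from new frames
    mu, var = fuse_gaussian(mu, var, obs, 0.5)
# After three updates the variance has dropped well below the prior's,
# and the mean has moved toward the observations.
```

In practice the filter would also track an outlier/inlier ratio or stop once the variance falls below a convergence threshold.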

Description

technical field

[0001] The invention belongs to the field of three-dimensional vision, and in particular relates to an image depth estimation method and system based on a CNN (convolutional neural network) and a depth filter.

Background technique

[0002] Most images encountered in practice are color images. A color image is formed by projecting a three-dimensional scene onto a two-dimensional plane; depth information is lost during imaging, and this loss makes many visual tasks difficult. For example, reconstructing a 3D scene is difficult without depth values. Recovering depth values from color images is therefore of great significance. At present, mainstream image depth acquisition methods fall into three categories. The first obtains depth through special hardware, mainly RGB-D cameras, whose principle is generally structured light or time-of-flight. Used in robot...

Claims


Application Information

IPC(8): G06T7/50, G06N3/04
CPC: G06T7/50, G06T2207/20024, G06T2207/10012, G06N3/045
Inventors: 金星, 姚志文, 张晶晶
Owner CHINA UNIV OF GEOSCIENCES (WUHAN)