
A method and system for image depth estimation based on CNN and depth filter

An image depth estimation technology, applied in the field of three-dimensional vision, which addresses problems such as the inability to handle pure rotation, the loss of absolute scale, and long computation times, achieving the effects of overcoming blurred object edges, recovering the absolute scale, and reducing the number of iterations required.

Publication Date: 2019-11-15 (Inactive)
CHINA UNIV OF GEOSCIENCES (WUHAN)

AI Technical Summary

Problems solved by technology

The disadvantage is that, due to its computational complexity, dense depth reconstruction is generally performed offline and is time-consuming.
Moreover, the structure-from-motion approach has significant flaws: the absolute scale is lost, and pure rotation cannot be handled.
[0005] It can be seen that, for various reasons, existing image depth estimation technology still suffers from problems such as a narrow application range, low precision, and low efficiency.

Method used



Examples


Embodiment Construction

[0040] In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and examples.

[0041] The specific process of the image depth estimation method based on CNN and a depth filter is shown in Figure 1. The method is divided into four parts: obtaining image depth estimates, pose estimation, pose optimization, and depth image acquisition.

[0042] 1. Obtain image depth estimates

[0043] Obtain multiple color images of the same target captured continuously by the camera, select one of them as the reference image and the remaining color images as associated images, and obtain the depth estimate corresponding to each pixel of the reference image through a CNN.
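As a minimal sketch of this step (not taken from the patent text): the snippet below assumes a generic pretrained monocular depth CNN wrapped as a PyTorch module; the model itself, its input normalization, and its output shape are placeholders, since the patent only states that a CNN produces a depth estimate for every pixel of the reference image.

    # Sketch of step 1: per-pixel CNN depth estimates for the reference image.
    # The depth model is a hypothetical placeholder (any monocular depth CNN
    # returning a 1x1xHxW tensor would fit this interface).
    import numpy as np
    import torch

    def estimate_reference_depth(reference_image: np.ndarray,
                                 model: torch.nn.Module) -> np.ndarray:
        """Run a monocular depth CNN on an HxWx3 color image, return HxW depths."""
        x = torch.from_numpy(reference_image).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            depth = model(x)                 # assumed output shape: 1x1xHxW
        return depth.squeeze().cpu().numpy()

    # Usage (hypothetical): the first frame serves as the reference image and the
    # remaining frames as associated images for the later pose-estimation steps.
    # ref_depth = estimate_reference_depth(frames[0], depth_cnn)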

[0044] 2. Pose estimation

[0045] In the pose (camera translation and rotation) estimation stage, let the image collected at time k be I_k, ...
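The paragraph above is truncated, but the abstract states that the relative camera pose is solved by minimizing a photometric error over local feature points back-projected with their CNN depth estimates. The sketch below illustrates that residual under common direct-method conventions; the pinhole back-projection and projection steps, the nearest-neighbour intensity lookup, and the choice of optimizer are assumptions rather than details from the patent.

    # Photometric residuals of reference feature points reprojected into image I_k.
    # The relative pose T_k_ref is the 4x4 transform that minimizes the sum of
    # squared residuals (e.g. via Gauss-Newton over an SE(3) parameterization).
    import numpy as np

    def photometric_residuals(I_k, I_ref, pts, depths, K, T_k_ref):
        """pts: Nx2 pixel coordinates of feature points in the reference image.
        depths: their CNN depth estimates; K: 3x3 camera intrinsics."""
        K_inv = np.linalg.inv(K)
        residuals = []
        for (u, v), d in zip(pts, depths):
            p_ref = d * (K_inv @ np.array([u, v, 1.0]))       # back-project to 3D
            p_k = T_k_ref[:3, :3] @ p_ref + T_k_ref[:3, 3]    # move into frame k
            uvw = K @ p_k
            u_k, v_k = uvw[0] / uvw[2], uvw[1] / uvw[2]       # reproject to pixels
            # nearest-neighbour lookup for brevity; a real solver would interpolate
            residuals.append(float(I_k[int(round(v_k)), int(round(u_k))])
                             - float(I_ref[int(round(v)), int(round(u))]))
        return np.array(residuals)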



Abstract

The invention discloses an image depth estimation method and system based on a CNN and a depth filter. The method includes: obtaining an image depth estimate based on a CNN, extracting local feature points from the image, and establishing a minimum photometric error equation over the local feature points to solve the relative camera pose; optimizing the camera pose based on epipolar search and feature-point matching, by selecting image blocks centered on the feature points, searching along the epipolar line for the best-matching block, and constructing a bundle adjustment equation from the best matches to optimize the relative pose; and filtering the depth values based on the camera pose, using Gaussian fusion until each depth value converges. The invention overcomes the problems of inaccurate image depth and the loss of absolute scale in monocular vision, and can be applied to 3D scene reconstruction, indoor positioning, augmented reality, and many other fields.
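To make the final step concrete, here is a minimal sketch of the standard Gaussian fusion update used by depth filters of this kind; the initial variance, the per-observation variances, and the convergence threshold are assumptions, since the abstract only states that Gaussian fusion is applied until the depth value converges.

    # Gaussian fusion of a depth estimate N(mu, sigma2) with a new observation
    # N(mu_obs, sigma2_obs): the product of two Gaussians is again Gaussian.
    def gaussian_fuse(mu, sigma2, mu_obs, sigma2_obs):
        fused_mu = (sigma2_obs * mu + sigma2 * mu_obs) / (sigma2 + sigma2_obs)
        fused_sigma2 = (sigma2 * sigma2_obs) / (sigma2 + sigma2_obs)
        return fused_mu, fused_sigma2

    # Example filtering loop (the numeric values below are assumed, not from the patent):
    # mu, sigma2 = cnn_depth, 1.0
    # for mu_obs, sigma2_obs in epipolar_observations:
    #     mu, sigma2 = gaussian_fuse(mu, sigma2, mu_obs, sigma2_obs)
    #     if sigma2 < 1e-4:       # treat the depth value as converged
    #         break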

Description

Technical Field
[0001] The invention belongs to the field of three-dimensional vision, and in particular relates to an image depth estimation method and system based on a CNN (convolutional neural network) and a depth filter.
Background Technique
[0002] Most common images in reality are color images. Color images are obtained by compressing a three-dimensional scene onto a two-dimensional plane; the depth information is lost during the imaging process, and this loss makes many visual tasks difficult. For example, due to the lack of depth values, the reconstruction of 3D scenes becomes difficult. Therefore, recovering image depth values from color images is of great significance. At present, the mainstream image depth acquisition methods fall into three categories. The first is to obtain the depth value through special hardware, mainly RGB-D cameras, whose principle is generally structured light or the time-of-flight method. Used in robot...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/50; G06N3/04
CPC: G06T7/50; G06T2207/20024; G06T2207/10012; G06N3/045
Inventors: 金星 (Jin Xing), 姚志文 (Yao Zhiwen), 张晶晶 (Zhang Jingjing)
Owner: CHINA UNIV OF GEOSCIENCES (WUHAN)