
Self-supervised monocular depth estimation method based on deep learning

A depth estimation and deep learning technology, applied in the fields of depth estimation and computer vision, which addresses the problems of inaccurate disparity prediction and insufficient exploitation of geometric correlation, and achieves the effect of improved accuracy.

Active Publication Date: 2021-03-26
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, existing methods usually only focus on utilizing synthesized target views to construct supervisory signals, and do not fully explore and exploit the geometric correlation between source views and synthesized target views.
Moreover, existing methods directly minimize the appearance difference between the synthesized target view and the real target view when learning disparity; because of occlusions between the source and target views, this leads to inaccurate disparity predictions near occluded regions.
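To make this limitation concrete, the sketch below (not taken from the patent; the tensor shapes, warping direction, and plain L1 loss are illustrative assumptions) shows the kind of view-synthesis supervision these existing methods rely on: the source view is warped with the predicted disparity to synthesize the target view, and the raw appearance difference to the real target view is minimized, with no special handling of occluded pixels.

```python
import torch
import torch.nn.functional as F

def warp_with_disparity(src, disp):
    """Synthesize a target view by horizontally resampling the source view (B, C, H, W)
    with a disparity map (B, 1, H, W) expressed in normalized image coordinates."""
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=src.device),
                            torch.linspace(-1, 1, w, device=src.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] + disp.squeeze(1)   # shift sampling positions horizontally
    return F.grid_sample(src, grid, align_corners=True)

def photometric_loss(synthesized, target):
    """Plain per-pixel L1 appearance difference; occluded pixels are treated the same
    as visible ones, which is exactly the weakness described above."""
    return (synthesized - target).abs().mean()
```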




Detailed Description of Embodiments

[0032] In order to make the purpose, technical solution, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.

[0033] An embodiment of the present invention provides a self-supervised monocular depth estimation method based on deep learning; see Figure 1. The method includes the following steps:

[0034] 1. Build a monocular depth estimation network

[0035] Given the original right view Ir, a monocular depth estimation network is used to learn the right-to-left disparity map Dl from the right view Ir. The monocular depth estimation network adopts an encoder-decoder structure with skip connections: the encoder uses ResNet50 to extract features from the right view, and the decoder consists of successive deconvolutions and skip connections that restore the feature maps to the resolution of the input image. Obtain the disparity map D from the...
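As a rough illustration of the network described in this step, the following PyTorch sketch builds an encoder-decoder with a ResNet50 encoder and a decoder of successive deconvolutions with skip connections that restore the input resolution. The decoder channel widths and the sigmoid disparity scaling are illustrative assumptions, not values given in the patent.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MonoDepthNet(nn.Module):
    """Encoder-decoder disparity network: ResNet50 encoder, deconvolution decoder with skips."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)   # torchvision >= 0.13 API
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)
        self.pool = backbone.maxpool
        self.enc = nn.ModuleList([backbone.layer1, backbone.layer2,
                                  backbone.layer3, backbone.layer4])
        enc_ch = [2048, 1024, 512, 256, 64]        # encoder feature channels, deepest first
        dec_ch = [512, 256, 128, 64, 32]           # illustrative decoder channel widths
        self.up = nn.ModuleList()
        in_ch = enc_ch[0]
        for skip_ch, out_ch in zip(enc_ch[1:] + [0], dec_ch):
            # Each deconvolution doubles the spatial resolution
            self.up.append(nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1))
            in_ch = out_ch + skip_ch
        self.head = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)

    def forward(self, x):
        feats = []
        x = self.stem(x)            # 1/2 resolution, 64 channels
        feats.append(x)
        x = self.pool(x)            # 1/4 resolution
        for layer in self.enc:
            x = layer(x)            # 1/4, 1/8, 1/16, 1/32 resolution
            feats.append(x)
        skips = feats[:-1][::-1] + [None]
        for deconv, skip in zip(self.up, skips):
            x = torch.relu(deconv(x))
            if skip is not None:
                x = torch.cat([x, skip], dim=1)   # skip connection from the encoder
        # Bounded right-to-left disparity map; the 0.3 scale is an illustrative choice
        return 0.3 * torch.sigmoid(self.head(x))
```

Under these assumptions, a right view of size 3 x 256 x 512 (dimensions divisible by 32) maps to a 1 x 256 x 512 disparity map at the input resolution.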



Abstract

The invention discloses a self-supervised monocular depth estimation method based on deep learning. The method comprises the steps of: extracting pyramid features of an original right view Ir and of a synthesized left view, performing a horizontal correlation operation on the pyramid features to obtain a multi-scale correlation feature Fc, and obtaining a completed multi-scale correlation feature Fm; feeding Fm to a visual cue prediction network in a binocular cue prediction module to generate an auxiliary visual cue Dr, reconstructing a right view from the synthesized left view, and optimizing the binocular cue prediction module through an image reconstruction loss between the reconstructed right view and the real right view Ir; using the visual cue Dr generated by the binocular cue prediction module to constrain the disparity map Dl predicted by the monocular depth estimation network, and enhancing the consistency between Dr and Dl with a consistency loss; and constructing an occlusion-guided constraint that assigns different weights to the reconstruction errors of pixels in occluded and non-occluded regions.
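As a companion to the abstract, the sketch below illustrates two of the operations it names: the horizontal correlation between right-view and synthesized left-view pyramid features, and an occlusion-guided reconstruction loss that down-weights the errors of occluded pixels. The maximum displacement, the occlusion weight value, and the mask convention are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn.functional as F

def horizontal_correlation(feat_r, feat_l, max_disp=24):
    """Correlate right-view features with horizontally shifted left-view features,
    yielding one correlation channel per candidate disparity: (B, max_disp + 1, H, W)."""
    w = feat_r.shape[-1]
    channels = []
    for d in range(max_disp + 1):
        # Shift the left-view features d pixels to the left (zero padding on the right),
        # so position x in the right view is compared with position x + d in the left view.
        shifted = F.pad(feat_l, (0, d))[:, :, :, d:]
        channels.append((feat_r * shifted).mean(dim=1, keepdim=True))
    return torch.cat(channels, dim=1)

def occlusion_guided_loss(reconstructed, target, occlusion_mask, occluded_weight=0.1):
    """Per-pixel L1 reconstruction error in which pixels flagged as occluded (mask == 1)
    receive a smaller weight than non-occluded pixels."""
    weights = torch.where(occlusion_mask > 0.5,
                          torch.full_like(occlusion_mask, occluded_weight),
                          torch.ones_like(occlusion_mask))
    return (weights * (reconstructed - target).abs()).mean()
```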

Description

Technical field

[0001] The invention relates to the fields of computer vision and depth estimation, and in particular to a self-supervised monocular depth estimation method based on deep learning.

Background technique

[0002] As one of the basic tasks of computer vision, depth perception is widely used in areas such as autonomous driving, augmented reality, robot navigation, and 3D reconstruction. Although active sensors (e.g., lidar, structured light, and time-of-flight) have been widely used to acquire scene depth directly, such devices are usually bulky, expensive, and power-hungry. In contrast, predicting depth from RGB (color) images is inexpensive and easy to implement. Among existing image-based depth estimation methods, monocular depth estimation does not rely on multiple acquisitions of the perceived environment and has received extensive attention from researchers.

[0003] In recent years, deep...


Application Information

IPC (8): G06T7/55, G06T5/50, G06N3/04, G06N3/08
CPC: G06T7/55, G06T5/50, G06N3/04, G06N3/08, G06T2207/20221, G06T2207/20228, G06T2207/20081
Inventors: 雷建军 (Lei Jianjun), 孙琳 (Sun Lin), 彭勃 (Peng Bo), 张哲 (Zhang Zhe), 刘秉正 (Liu Bingzheng)
Owner: TIANJIN UNIV