
A Self-Supervised Monocular Depth Estimation Method Based on Deep Learning

A depth estimation and deep learning technology, applied in the fields of depth estimation and computer vision, that addresses problems such as inaccurate disparity prediction and insufficient exploration of geometric correlation, achieving the effect of improved accuracy.

Active Publication Date: 2022-06-28
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, existing methods usually only focus on utilizing synthesized target views to construct supervisory signals, and do not fully explore and exploit the geometric correlation between source views and synthesized target views.
Moreover, because of occlusion between the source and target views, existing methods that directly minimize the appearance difference between the synthesized target view and the real target view during disparity learning produce inaccurate disparity predictions near occluded regions.

Method used




Embodiment Construction

[0032] In order to make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are further described in detail below.

[0033] The embodiment of the present invention provides a self-supervised monocular depth estimation method based on deep learning (see Figure 1). The method includes the following steps:

[0034] 1. Building a monocular depth estimation network

[0035] For the original right view I_r, a monocular depth estimation network learns the right-to-left disparity map D_l from I_r. The monocular depth estimation network adopts an encoder-decoder structure with skip connections. The encoder uses ResNet50 to extract features from the right view, and the decoder consists of successive deconvolution layers and skip connections that gradually restore the feature map resolution to that of the input image. The disparity map D obtained from the monocular de...
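The paragraph above is truncated in the source, but the described architecture is enough for a minimal sketch. The following PyTorch code illustrates one plausible encoder-decoder of this kind, using torchvision's ResNet50 as the encoder; the channel counts, ELU activations, the class name MonoDepthNet, and the sigmoid disparity scaling are assumptions for illustration, not details taken from the patent.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class MonoDepthNet(nn.Module):
    """Encoder-decoder disparity network with skip connections (sketch)."""
    def __init__(self):
        super().__init__()
        r = resnet50(weights=None)
        # Encoder stages and the scales/channels they produce.
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)  # 1/2, 64 ch
        self.pool = r.maxpool                              # 1/4
        self.enc1, self.enc2 = r.layer1, r.layer2          # 1/4 256 ch, 1/8 512 ch
        self.enc3, self.enc4 = r.layer3, r.layer4          # 1/16 1024 ch, 1/32 2048 ch

        def up(cin, cout):  # deconvolution block that doubles resolution
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                nn.ELU(inplace=True))
        self.up4 = up(2048, 512)        # 1/32 -> 1/16
        self.up3 = up(512 + 1024, 256)  # 1/16 -> 1/8, after skip concat
        self.up2 = up(256 + 512, 128)   # 1/8  -> 1/4
        self.up1 = up(128 + 256, 64)    # 1/4  -> 1/2
        self.up0 = up(64 + 64, 32)      # 1/2  -> full resolution
        self.head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, right):  # right view I_r: (B, 3, H, W), H and W divisible by 32
        s0 = self.stem(right)
        s1 = self.enc1(self.pool(s0))
        s2 = self.enc2(s1)
        s3 = self.enc3(s2)
        s4 = self.enc4(s3)
        x = self.up4(s4)
        x = self.up3(torch.cat([x, s3], dim=1))  # skip connection at 1/16
        x = self.up2(torch.cat([x, s2], dim=1))  # skip connection at 1/8
        x = self.up1(torch.cat([x, s1], dim=1))  # skip connection at 1/4
        x = self.up0(torch.cat([x, s0], dim=1))  # skip connection at 1/2
        # Right-to-left disparity D_l; the 0.3 range cap is a common
        # convention in self-supervised depth work, not from the patent.
        return 0.3 * torch.sigmoid(self.head(x))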



Abstract

The invention discloses a self-supervised monocular depth estimation method based on deep learning. The method includes: extracting pyramid features from the original right view I_r and from the synthesized left view, and performing a horizontal correlation operation on the pyramid features to obtain multi-scale correlation features F_c and, from them, improved multi-scale correlation features F_m; feeding F_m to the visual cue prediction network in the binocular cue prediction module to generate an auxiliary visual cue D_r, reconstructing the right view from the synthesized left view, and using the image reconstruction loss between the reconstructed right view and the real right view I_r to optimize the binocular cue prediction module; using the visual cue D_r generated by the binocular cue prediction module to constrain the disparity map D_l predicted by the monocular depth estimation network, with a consistency loss that enhances the consistency between the two; and constructing an occlusion-guided constraint that assigns different weights to the reconstruction errors of pixels in occluded regions and pixels in non-occluded regions.
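The horizontal correlation operation and the occlusion-guided weighting from the abstract can be sketched in a few lines, assuming PyTorch. The function names, the shift range max_shift, the mask convention, and the weight values w_occ/w_vis are illustrative assumptions; the abstract only states that occluded and non-occluded pixels receive different weights.

import torch
import torch.nn.functional as F

def horizontal_correlation(feat_r, feat_l, max_shift=24):
    # Correlate right-view features with horizontally shifted left-view
    # features: returns a (B, max_shift + 1, H, W) volume whose channel d
    # holds the mean inner product at horizontal displacement d.
    volume = []
    for d in range(max_shift + 1):
        if d == 0:
            shifted = feat_l
        else:
            # Shift left-view features right by d pixels, zero-padding on the left.
            shifted = F.pad(feat_l[..., :-d], (d, 0))
        volume.append((feat_r * shifted).mean(dim=1, keepdim=True))
    return torch.cat(volume, dim=1)

def occlusion_weighted_l1(recon_r, real_r, occ_mask, w_occ=0.1, w_vis=1.0):
    # Occlusion-guided reconstruction loss: per-pixel L1 error, down-weighted
    # where occ_mask is 1 (occluded) and fully weighted where it is 0.
    err = (recon_r - real_r).abs().mean(dim=1, keepdim=True)
    weights = w_occ * occ_mask + w_vis * (1.0 - occ_mask)
    return (weights * err).mean()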

Description

Technical Field

[0001] The invention relates to the fields of computer vision and depth estimation, and in particular to a self-supervised monocular depth estimation method based on deep learning.

Background Technique

[0002] As one of the basic tasks of computer vision, depth perception is widely applicable in autonomous driving, augmented reality, robot navigation, and 3D reconstruction. Although active sensors (e.g., lidar, structured light, and time-of-flight) have been widely used to acquire scene depth directly, active sensor devices are usually bulky, expensive, and energy-intensive. In contrast, depth prediction based on RGB (color) images has the advantages of low cost and easy implementation. Among existing image-based depth estimation methods, monocular depth estimation does not rely on multiple acquisitions of the perceived environment and has received extensive attention from researchers.

[0003] In recent years, deep lear...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06T7/55G06T5/50G06N3/04G06N3/08
CPCG06T7/55G06T5/50G06N3/04G06N3/08G06T2207/20221G06T2207/20228G06T2207/20081
Inventor 雷建军孙琳彭勃张哲刘秉正
Owner TIANJIN UNIV