
A direct method for unsupervised scene depth estimation from monocular images

A technology for scene depth estimation, applied in the fields of depth estimation, neural networks, and computer vision. It addresses problems such as errors caused by changes in camera extrinsic parameters that cannot otherwise be eliminated, and achieves effects such as reduced requirements and training costs, overcoming the void problem, and high precision.

Active Publication Date: 2022-07-19
SHANDONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Traditional depth estimation methods are mainly based on geometric constraints from feature-point matching and on assumptions about the environment. Binocular and multi-view depth estimation methods require precise camera extrinsic parameters, and the errors caused by changes in those extrinsic parameters cannot be eliminated.

Method used

Embodiment Construction

[0035] The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments:

[0036] As shown in Figure 1, a direct method for unsupervised scene depth estimation from monocular images includes:

[0037] Step 1: Construct the depth estimation neural network as a fully convolutional U-shaped network comprising a convolution part and a deconvolution part. The depth estimation neural network takes consecutive monocular images as input and outputs a depth estimation image.

[0038] Step 1 includes the following sub-steps:

[0039] Step 1.1: The convolution part uses the convolutional network of the ResNet-18 architecture as its main structure. It consists of several convolution blocks, with adjacent convolution blocks connected by a max-pooling operation. Each convolution block contains several conv...
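
The steps above describe the network only at a high level. The following is a minimal sketch of such a fully convolutional U-shaped depth network, assuming a PyTorch implementation with a ResNet-18 encoder; the decoder widths, the transposed-convolution upsampling, the skip-connection choices, and the sigmoid output head are illustrative assumptions rather than details taken from the patent text.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class DepthNet(nn.Module):
        """U-shaped fully convolutional network: ResNet-18 convolution part
        plus a deconvolution part with skip connections (illustrative sketch)."""
        def __init__(self):
            super().__init__()
            resnet = models.resnet18()
            # Convolution part: reuse the ResNet-18 stages as convolution blocks.
            self.enc1 = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu)  # 64 ch,  H/2
            self.enc2 = nn.Sequential(resnet.maxpool, resnet.layer1)          # 64 ch,  H/4
            self.enc3 = resnet.layer2                                         # 128 ch, H/8
            self.enc4 = resnet.layer3                                         # 256 ch, H/16
            self.enc5 = resnet.layer4                                         # 512 ch, H/32

            def up(cin, cout):
                # Deconvolution block: doubles the spatial resolution.
                return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.ELU())

            self.up5 = up(512, 256)
            self.up4 = up(256 + 256, 128)
            self.up3 = up(128 + 128, 64)
            self.up2 = up(64 + 64, 32)
            self.up1 = up(32 + 64, 16)
            self.head = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, x):                       # x: (B, 3, H, W) monocular frame
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            e3 = self.enc3(e2)
            e4 = self.enc4(e3)
            e5 = self.enc5(e4)
            d = self.up5(e5)                        # decode with skip connections
            d = self.up4(torch.cat([d, e4], dim=1))
            d = self.up3(torch.cat([d, e3], dim=1))
            d = self.up2(torch.cat([d, e2], dim=1))
            d = self.up1(torch.cat([d, e1], dim=1))
            return self.head(d)                     # (B, 1, H, W) depth-like map in (0, 1)

In this sketch the input height and width are assumed divisible by 32 so that encoder and decoder feature maps align; the sigmoid output can then be rescaled to a relative or metric depth range.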

Abstract

The invention discloses a direct-method unsupervised monocular image scene depth estimation method, belonging to the fields of computer vision and depth estimation, comprising the following steps: constructing a neural network; calculating the image reprojection error; calculating an image mask; and updating the camera pose. The method overcomes defects of monocular depth estimation such as high requirements on the environment, susceptibility to interference from low-texture areas, and poor camera pose estimation accuracy. Depth estimation accuracy is improved, and the method also assists the positioning and navigation of mobile vehicles. The invention has advantages such as high accuracy, strong flexibility, and a wide application range, and can be used in application scenarios such as collision avoidance and positioning/navigation for autonomous vehicles, mobile robots, and other equipment.
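
As a concrete illustration of the reprojection-error and mask steps named above, the sketch below warps a source frame into the target view using the predicted depth and a relative camera pose, then averages an L1 photometric error under a simple per-pixel mask. It assumes a PyTorch implementation with known camera intrinsics K; the L1 error and the identity-comparison mask are common choices in unsupervised monocular training and are assumptions here, not details quoted from the patent.

    import torch
    import torch.nn.functional as F

    def reproject(depth, pose, K, K_inv):
        """Back-project target pixels with `depth`, transform them by the relative
        camera `pose`, and return normalized sampling coordinates in the source view.
        depth: (B,1,H,W), pose: (B,4,4), K and K_inv: (B,3,3)."""
        B, _, H, W = depth.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()     # (3, H, W)
        pix = pix.view(1, 3, -1).expand(B, -1, -1).to(depth.device)         # (B, 3, HW)
        cam = (K_inv @ pix) * depth.view(B, 1, -1)                          # 3D points
        cam = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)          # homogeneous
        proj = K @ (pose @ cam)[:, :3]                                      # into source view
        uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)                     # pixel coordinates
        u = 2 * uv[:, 0] / (W - 1) - 1                                      # normalize to [-1, 1]
        v = 2 * uv[:, 1] / (H - 1) - 1
        return torch.stack([u, v], dim=-1).view(B, H, W, 2)

    def reprojection_loss(target, source, depth, pose, K, K_inv):
        grid = reproject(depth, pose, K, K_inv)
        warped = F.grid_sample(source, grid, align_corners=True)            # source -> target
        err = (warped - target).abs().mean(dim=1, keepdim=True)             # L1 photometric error
        # Simple mask: keep pixels where warping explains the target better than
        # doing nothing (an assumed heuristic for static or occluded regions).
        ident = (source - target).abs().mean(dim=1, keepdim=True)
        mask = (err < ident).float()
        return (mask * err).sum() / mask.sum().clamp(min=1.0)

In training, such a loss would be evaluated against each adjacent source frame and minimized jointly over the depth network output and the camera pose estimate, which is how the pose update step interacts with the reprojection error.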

Description

Technical Field

[0001] The invention belongs to the fields of computer vision, neural networks, and depth estimation, and more particularly relates to a direct-method unsupervised monocular image scene depth estimation method.

Background Technique

[0002] Depth estimation is currently developing rapidly, driven by related technologies such as neural networks and sensors, and has been widely used in intelligent robots, pedestrian recognition, face unlocking, VR applications, and autonomous driving. The primary task of depth estimation is to estimate the distance from objects in front of the camera to the camera based on a single color image captured by the camera.

[0003] There are two main ways to obtain the three-dimensional information corresponding to a real scene: one is to use a sensor that can perceive the three-dimensional depth information of the scene to collect the depth information directly, and the other is to restore the three-dimensional image from th...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/55; G06T7/70; G06N3/04
CPC: G06T7/55; G06T7/70; G06N3/045
Inventors: 张治国, 孙业昊, 孙浩然, 王海霞, 卢晓, 盛春阳, 李玉霞
Owner: SHANDONG UNIV OF SCI & TECH