Direct method unsupervised monocular image scene depth estimation method

A depth estimation and scene-depth technology, applied in the fields of neural networks, depth estimation, and computer vision. It solves problems such as the inability to eliminate errors caused by changes in camera extrinsic parameters, and achieves effects such as removing the strong dependence on radar sensors, a wide range of applications, and improved accuracy.

Active Publication Date: 2020-12-15
SHANDONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

Traditional depth estimation methods rely mainly on geometric constraints derived from feature-point matching and on assumptions about the environment. Binocular and multi-view depth estimation methods require precise camera extrinsic parameters, and the errors introduced when those extrinsic parameters change cannot be eliminated.

Embodiment Construction

[0035] The present invention is described in further detail below in conjunction with the accompanying drawings and a specific embodiment:

[0036] As shown in Figure 1, a direct-method unsupervised monocular image scene depth estimation method includes:

[0037] Step 1: Construct the depth estimation neural network as a fully convolutional U-shaped network consisting of a convolutional part and a deconvolutional part. The network takes consecutive monocular images as input and outputs a depth estimation image.

[0038] Step 1 includes the following sub-steps:

[0039] Step 1.1: The convolutional part uses the convolutional network of the ResNet-18 architecture as its backbone. It is composed of several convolution blocks, with each convolution block connected to the next through a max-pooling operation. Each convolution block contains...
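
The paragraph is truncated in the available text. Purely as an illustration of Steps 1 and 1.1, the following is a minimal PyTorch-style sketch of a U-shaped depth network built from a ResNet-18 convolutional encoder and a deconvolutional decoder; the class name DepthNet, the skip connections, the sigmoid output head, and the channel sizes are assumptions made for the sketch, not details taken from the patent.

```python
# Minimal sketch (assumed details, not the patent's exact architecture):
# a fully convolutional U-shaped depth network with a ResNet-18 encoder
# (convolutional part) and a deconvolutional decoder.
import torch
import torch.nn as nn
import torchvision.models as models

class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        resnet = models.resnet18(weights=None)
        # Convolutional part: ResNet-18 blocks, downsampling between blocks.
        self.enc1 = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu)  # 1/2,  64 ch
        self.enc2 = nn.Sequential(resnet.maxpool, resnet.layer1)          # 1/4,  64 ch
        self.enc3 = resnet.layer2                                          # 1/8,  128 ch
        self.enc4 = resnet.layer3                                          # 1/16, 256 ch
        self.enc5 = resnet.layer4                                          # 1/32, 512 ch

        # Deconvolutional part: transposed convolutions restore resolution.
        def up(in_c, out_c):
            return nn.Sequential(
                nn.ConvTranspose2d(in_c, out_c, 4, stride=2, padding=1),
                nn.ReLU(inplace=True))
        self.dec5 = up(512, 256)
        self.dec4 = up(256 + 256, 128)
        self.dec3 = up(128 + 128, 64)
        self.dec2 = up(64 + 64, 32)
        self.dec1 = up(32 + 64, 16)
        self.head = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):                      # x: (B, 3, H, W) monocular frame
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        e5 = self.enc5(e4)
        d5 = self.dec5(e5)                     # skip connections give the U shape
        d4 = self.dec4(torch.cat([d5, e4], 1))
        d3 = self.dec3(torch.cat([d4, e3], 1))
        d2 = self.dec2(torch.cat([d3, e2], 1))
        d1 = self.dec1(torch.cat([d2, e1], 1))
        return self.head(d1)                   # (B, 1, H, W) depth estimation image
```

Feeding a (B, 3, H, W) image with H and W divisible by 32 returns a depth map of the same spatial size; the actual patent may use different block counts, channel widths, or output parameterization.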

Abstract

The invention discloses a direct-method unsupervised monocular image scene depth estimation method, which belongs to the fields of computer vision and depth estimation, and comprises the following steps: constructing a neural network; calculating an image re-projection error; and calculating an image mask and updating the camera pose. The method overcomes defects of monocular image depth estimation such as strict environmental requirements, susceptibility to interference from low-texture regions, and poor camera pose estimation accuracy. By combining the traditional monocular depth estimation problem with visual odometry, depth estimation accuracy is markedly improved, which in turn assists in the positioning and navigation of moving vehicles. The method offers high accuracy, high flexibility, and a wide range of applications; it can be used for surrounding-environment perception, collision avoidance, and positioning and navigation of equipment such as autonomous vehicles and mobile robots, and suits many application scenarios.
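
The abstract names three computational steps: constructing the neural network, calculating an image re-projection error, and calculating an image mask while updating the camera pose. As a hedged illustration of the re-projection error step only (not the patent's exact formulation), the sketch below back-projects target pixels with a predicted depth map, transforms them with a relative camera pose, samples the source frame, and takes a per-pixel L1 photometric error; the function names, the intrinsics K, and the pose matrix T are assumptions introduced for the sketch.

```python
# Rough sketch of a photometric re-projection error, as commonly used in
# self-supervised monocular depth training; the patent's exact loss and
# masking may differ.
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift every target pixel to a 3D point using the predicted depth (B,1,H,W)."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()   # (3, H, W)
    pix = pix.view(1, 3, -1).expand(b, -1, -1).to(depth.device)       # (B, 3, H*W)
    rays = K_inv @ pix                                                 # camera rays
    return depth.view(b, 1, -1) * rays                                 # (B, 3, H*W)

def reprojection_error(target, source, depth, T, K, K_inv):
    """Warp `source` into the target view via depth and relative pose T (B,4,4)."""
    b, _, h, w = target.shape
    pts = backproject(depth, K_inv)
    ones = torch.ones(b, 1, h * w, device=depth.device)
    cam_src = (T @ torch.cat([pts, ones], dim=1))[:, :3]               # points in source frame
    pix = K @ cam_src
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)                     # perspective divide
    x_n = 2 * pix[:, 0] / (w - 1) - 1                                  # normalize for grid_sample
    y_n = 2 * pix[:, 1] / (h - 1) - 1
    grid = torch.stack([x_n, y_n], dim=2).view(b, h, w, 2)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (target - warped).abs().mean(dim=1, keepdim=True)           # per-pixel L1 error
```

Minimizing this error over consecutive monocular frames is what allows the depth network and the camera pose to be trained without ground-truth depth labels; the mask computation mentioned in the abstract would typically down-weight pixels where this error is unreliable.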

Description

Technical field

[0001] The invention belongs to the fields of computer vision, neural networks, and depth estimation, and more specifically relates to a direct-method unsupervised monocular image scene depth estimation method.

Background technique

[0002] At present, depth estimation has developed rapidly, driven by related technologies such as neural networks and sensors, and has been widely applied to intelligent robots, pedestrian recognition, face unlocking, VR applications, and automatic driving. The primary task of depth estimation is to estimate the distance from objects in front of the camera to the camera based on a single color image captured by the camera.

[0003] There are two main ways to obtain the 3D information corresponding to a real scene: one is to use a sensor that can perceive the 3D depth information of the scene to collect that depth information directly, and the other is to recover the 3D information from the 2D image correspond...

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 7/55, G06T 7/70, G06N 3/04
CPC: G06T 7/55, G06T 7/70, G06N 3/045
Inventors: 张治国, 孙业昊, 孙浩然, 王海霞, 卢晓, 盛春阳, 李玉霞
Owner: SHANDONG UNIV OF SCI & TECH