Unsupervised monocular view depth estimation method based on multi-scale unification

A multi-scale depth estimation technology applied in the field of image processing. It addresses the problems of missing depth-map texture, holes in the depth map, and reduced depth estimation accuracy, and it lowers the difficulty of acquiring training data.

Pending Publication Date: 2020-06-23
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Problems solved by technology

Researchers exploit the powerful learning ability of neural networks to design models that fully mine the relationship between an original image and its depth map, so that scene depth can be predicted from an input image. However, as noted above, real depth labels for a scene are very scarce, which means the depth estimation task must be completed without real depth labels, using an unsupervised method.
One class of unsupervised methods uses the temporal information of monocular video as the supervisory signal. However, because such methods rely on video collected during motion, the camera itself is moving, and the image sequence's rel...




Embodiment Construction

[0043] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

[0044] Referring to Figures 1-6, an unsupervised monocular depth estimation method based on multi-scale unification. The unsupervised monocular depth estimation network model was trained on a desktop workstation in our laboratory: the graphics card is an NVIDIA GeForce GTX 1080Ti, the operating system is Ubuntu 14.04, and TensorFlow 1.4.0 is used as the framework for building the platform. Training is carried out on the classic KITTI 2015 stereo driving data set.

[0045] As shown in Figure 1, the unsupervised monocular view depth estimation method based on multi-scale unification of the present invention specifically includes the following steps: ...
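The multi-scale unification idea behind the method — build a pyramid of the input at several scales, then bring every low-scale disparity map back up to the original input size before it is used — can be sketched as follows. This is a minimal NumPy illustration, not the patent's TensorFlow 1.4.0 implementation; the function names and the 2×2 average-pooling / nearest-neighbour choices are assumptions made for clarity.

```python
import numpy as np

def build_pyramid(img, num_scales=4):
    """Pyramid multi-scale processing: repeatedly halve an (H, W, C) image
    with simple 2x2 average pooling."""
    pyramid = [img]
    for _ in range(num_scales - 1):
        h, w = pyramid[-1].shape[:2]
        small = pyramid[-1][:h - h % 2, :w - w % 2]
        small = small.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
        pyramid.append(small)
    return pyramid

def upsample_to(disp, out_h, out_w):
    """Unified up-sampling: bring a low-scale disparity map to the original
    input size. Disparity is measured in pixels, so values are rescaled by
    the width ratio."""
    in_h, in_w = disp.shape
    rows = np.arange(out_h) * in_h // out_h   # nearest-neighbour row index
    cols = np.arange(out_w) * in_w // out_w   # nearest-neighbour col index
    up = disp[rows[:, None], cols]
    return up * (out_w / in_w)
```

In the real network the up-sampling would be bilinear and differentiable; the key point sketched here is that every scale's disparity map is compared at the *same* full resolution, which is what prevents blurred low-scale maps from introducing holes.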



Abstract

The invention belongs to the technical field of image processing and discloses an unsupervised monocular view depth estimation method based on multi-scale unification. The method comprises the following steps: S1, performing pyramid multi-scale processing on an input stereo image pair; S2, constructing an encoder-decoder network framework; S3, transmitting the features extracted in the encoding stage to a deconvolutional neural network to realize feature extraction from input images at different scales; S4, uniformly up-sampling the disparity maps of different scales to the original input size; S5, performing image reconstruction using the input original image and the corresponding disparity map; S6, constraining the accuracy of the image reconstruction; S7, training the network model with a gradient descent method; and S8, fitting the corresponding disparity map from the input image and the pre-trained model. The method requires no real depth data to supervise network training: easily obtained binocular images serve as training samples, which greatly reduces the difficulty of acquiring training data, and the problem of depth-map holes caused by blurred low-scale disparity maps is solved.
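Steps S5 and S6 above rest on the standard idea in unsupervised stereo training: warp one view with the predicted disparity to reconstruct the other, then penalise the photometric difference. Below is a minimal NumPy sketch of that reconstruction under the usual rectified-stereo assumption (a left-image pixel at column x matches the right-image pixel at column x − d); the helper names are illustrative, not taken from the patent.

```python
import numpy as np

def warp_with_disparity(right, disp):
    """Reconstruct the left view by sampling the right image at x - disp:
    each left pixel looks up its horizontal stereo match."""
    h, w = right.shape[:2]
    xs = np.arange(w)[None, :] - disp            # sampling x-coordinates
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    frac = np.clip(xs - x0, 0.0, 1.0)
    rows = np.arange(h)[:, None]
    # linear interpolation along x keeps the warp differentiable in disp
    return (1 - frac) * right[rows, x0] + frac * right[rows, x1]

def l1_reconstruction_loss(pred, target):
    """Photometric constraint (step S6): mean absolute reconstruction error."""
    return np.abs(pred - target).mean()
```

In the trained network this loss (typically combined with an SSIM term and a disparity-smoothness term in methods of this family) is minimised by gradient descent over the disparity-predicting weights, which is what lets stereo pairs replace ground-truth depth labels.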

Description

technical field [0001] The invention relates to the technical field of image processing, and in particular to an unsupervised monocular view depth estimation method based on multi-scale unification. Background technique [0002] With the development of science and technology and the explosive growth of information, people's attention to image scenes is gradually shifting from two dimensions to three. The three-dimensional information of objects plays a great role in daily life; among its most widespread and important applications are assisted driving systems for driving scenes. Owing to the rich information contained in images, vision sensors cover almost all the relevant information required for driving, including but not limited to lane geometry, traffic signs, lights, and object position and speed. Among all forms of visual information, depth information plays a very important role in driver assistance systems. For...


Application Information

IPC(8): G06T7/50, G06N3/04, G06N3/08
CPC: G06T7/50, G06N3/08, G06T2207/20228, G06T2207/10012, G06T2207/20016, G06N3/045
Inventors: 丁萌, 姜欣言, 曹云峰, 李旭, 张振振
Owner NANJING UNIV OF AERONAUTICS & ASTRONAUTICS