Unsupervised convolutional neural network-based monocular scene depth estimation method

A convolutional neural network and scene depth technology, applied in the field of scene depth estimation, which addresses the problems that depth sensors cannot reliably obtain image depth information, that manual data labeling is difficult, and that estimation results are not accurate enough.

Active Publication Date: 2019-11-26
DALIAN MARITIME UNIVERSITY

Problems solved by technology

When building a depth image data set, external conditions such as lighting and weather changes can prevent the depth sensor from obtaining reliable and accurate image depth information, which in turn affects the accuracy of the depth estimation model's results; moreover, this kind of supervised learning requires difficult manual data labeling.
On the other hand, as the layers of a convolutional neural network deepen, the vanishing gradient problem may arise, which makes the network harder to train and leads to inaccurate results.
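
Residual (skip) connections are the standard remedy for vanishing gradients in deep networks, which is consistent with the residual convolutional neural network used by this invention. The following is a minimal, generic PyTorch sketch of a residual block, offered only as an illustration of the technique; it is not the patent's actual architecture, and all layer choices are assumptions:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: y = ReLU(F(x) + x).

    The identity shortcut lets gradients flow past the convolutions,
    mitigating the vanishing-gradient problem in deep networks.
    (Illustrative only; not the patent's architecture.)
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Skip connection: the "+ x" term provides a direct gradient path.
        return self.relu(out + x)
```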




Embodiment Construction

[0069] The present invention is further described below in conjunction with the accompanying drawings. The scene image is processed according to the flow chart shown in Figure 1. First, a camera is used to shoot a video of the scene to be processed, and consecutive frames are selected, as shown in Figure 2; these serve as the original images for scene depth estimation in the present invention. According to steps A, B, and C of the present invention, the mapping from the original image to the depth map is realized using the idea of view synthesis, and the loss function L_VS of the unsupervised residual convolutional neural network scene depth estimation model is obtained, as shown in formula (6). Then, according to step D of the present invention, the feature expression D_ij(d_p, d_j) of the target image is obtained. Using the outputs of these two parts, a scene depth estimation model based on an unsupervised conditional random field residual convolutional neural network is formed.
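
Formula (6) is not reproduced in this record, so the exact form of L_VS is unknown; the sketch below is a generic view-synthesis photometric loss in the same spirit, written in PyTorch. The warping scheme, tensor shapes, and the function name view_synthesis_loss are all assumptions made for illustration:

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(target, source, depth, pose, K):
    """Generic view-synthesis photometric loss (a sketch, not formula (6)).

    target, source: (B, 3, H, W) consecutive video frames
    depth:          (B, 1, H, W) predicted depth for the target frame
    pose:           (B, 4, 4) relative camera transform, target -> source
    K:              (B, 3, 3) camera intrinsics
    """
    B, _, H, W = target.shape
    device = target.device

    # Pixel grid of the target image in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).view(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3-D with the predicted depth, apply the pose,
    # and re-project into the source view.
    cam = (torch.linalg.inv(K) @ pix) * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)
    proj = K @ (pose @ cam_h)[:, :3]
    u = proj[:, 0] / (proj[:, 2] + 1e-7)
    v = proj[:, 1] / (proj[:, 2] + 1e-7)

    # Normalise coordinates to [-1, 1] and warp the source frame.
    grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1)
    warped = F.grid_sample(source, grid.view(B, H, W, 2), align_corners=True)

    # Photometric (L1) difference between the target and the synthesised view.
    return (target - warped).abs().mean()
```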



Abstract

The invention discloses an unsupervised convolutional neural network-based monocular scene depth estimation method. The method comprises the following steps: obtaining a depth value for each pixel of a target image; obtaining the camera pose under which pixel coordinates on the target image are transferred to the next frame of image; constructing a loss function; and performing scene depth estimation with an unsupervised conditional random field residual convolutional neural network. The unsupervised approach avoids the difficulty of manual data labeling, saving manpower and improving economic benefits. The invention adopts the idea of a linear-chain conditional random field to realize the feature expression of the original image, and combines it with an unsupervised residual convolutional neural network scene depth estimation model to form an unsupervised conditional random field residual convolutional neural network scene depth estimation model. The proposed model is superior to three other models in average relative error (rel) and accuracy (acc).
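
The record does not give the linear-chain conditional random field potentials, so the following sketch is purely speculative: a simple energy over per-pixel depths in the spirit of the pairwise feature expression D_ij(d_p, d_j) mentioned in the embodiment, with a quadratic unary term tying each depth to the network prediction and quadratic pairwise terms encouraging chain neighbours to agree. The function name crf_energy and both potentials are assumptions:

```python
import torch

def crf_energy(depth: torch.Tensor, predicted: torch.Tensor,
               weight: float = 1.0) -> torch.Tensor:
    """Speculative CRF-style energy over an (H, W) depth map.

    unary:    ties each pixel's depth to the network's prediction;
    pairwise: penalises disagreement between neighbouring pixels along
              image rows and columns (linear-chain smoothness).
    """
    unary = ((depth - predicted) ** 2).sum()
    pairwise = ((depth[:, 1:] - depth[:, :-1]) ** 2).sum() + \
               ((depth[1:, :] - depth[:-1, :]) ** 2).sum()
    return unary + weight * pairwise
```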

Description

Technical Field

[0001] The invention relates to a method for estimating scene depth, and in particular to a monocular scene depth estimation method based on an unsupervised convolutional neural network.

Background Technique

[0002] Computer vision is essentially the simulation of biological vision by computers and related visual sensors. A camera is first used to capture images of the outside world; a computer then converts the images into digital signals, realizing digital processing of the image. This gave rise to a new discipline, computer vision, whose application fields include target tracking, image classification, face recognition, scene understanding, and more. The goal of computer vision research is to enable computers to observe the environment, understand the environment, and adapt to the environment autonomously, just like humans.

[0003] However, most current computer vision technologies are aimed at digital image processing. S...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/50; G06N3/04
CPC: G06T7/50; G06N3/045; Y02T10/40
Inventors: 刘洪波, 岳晓彤, 江同棒, 张博, 马茜, 王乃尧, 杨丽平, 林正奎
Owner: DALIAN MARITIME UNIVERSITY