
Monocular unsupervised depth estimation method based on context attention mechanism

A depth estimation and attention technology, applied in the fields of image processing and computer vision, that addresses problems such as the inability to guarantee sharp depth edges and the integrity of fine structures in depth maps, the inability to capture long-range correlations, and poor-quality depth estimation maps. It achieves good scalability, easy construction, and fast operation.

Active Publication Date: 2020-10-02
DALIAN UNIV OF TECH
Cites 2 · Cited by 16

AI Technical Summary

Problems solved by technology

Although current unsupervised loss functions are simple in form, they cannot guarantee sharp depth edges or preserve the fine structures of the depth map, especially in occluded and low-texture regions, resulting in poor-quality depth estimates.
In addition, current deep-learning-based monocular depth estimation methods usually cannot capture correlations between long-range features, so they fail to learn strong feature representations, and details are lost in the estimated depth map.
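
As a rough illustration of how a context attention mechanism can capture such long-range correlations, the sketch below shows a generic non-local self-attention block in PyTorch. The class name, reduction ratio, and feature sizes are illustrative assumptions and are not taken from the patent.

```python
# Minimal sketch of a non-local (self-attention) block that relates every
# spatial position to every other position, i.e. long-range correlations.
# Layer names and sizes are illustrative assumptions, not the patented design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # B x HW x C'
        k = self.key(x).flatten(2)                    # B x C' x HW
        attn = F.softmax(q @ k, dim=-1)               # B x HW x HW affinity map
        v = self.value(x).flatten(2).transpose(1, 2)  # B x HW x C
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                   # residual connection

# Example: enrich a hypothetical 64-channel encoder feature map.
feat = torch.randn(1, 64, 32, 104)
print(ContextAttention(64)(feat).shape)  # torch.Size([1, 64, 32, 104])
```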

Method used




Detailed description of the embodiments

[0026] The present invention proposes a monocular unsupervised depth estimation method based on a contextual attention mechanism, which is described in detail below in conjunction with the drawings and embodiments:

[0027] The method comprises the following steps:

[0028] 1) Prepare initial data:

[0029] 1-1) The invention is evaluated on two public datasets, the KITTI dataset and the Make3D dataset;

[0030] 1-2) The KITTI dataset is used for training and testing the method of the present invention. It contains 40,000 training samples, 4,000 validation samples, and 697 test samples. During training, the original images are scaled from a resolution of 375×1242 to 128×416. The length of the input image sequence is set to 3 during network training; the middle frame is the target view and the other frames are source views (see the data-loading sketch after this list).

[0031] 1-3) The Make3D dataset is mainly used to test the generalization performance of the present invention across datasets. The Make3D dataset has ...
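
A minimal sketch of the data preparation described in step 1-2), assuming PIL-based image loading; the function name and file names are hypothetical and only illustrate resizing to 128×416 and splitting a 3-frame sequence into the middle target view and the surrounding source views.

```python
# Sketch of the preparation in step 1-2): resize KITTI frames to 128x416 and
# split a 3-frame sequence into the middle (target) view and the source views.
# The PIL-based loading, function name, and file names are assumptions.
from PIL import Image
import numpy as np

def load_sequence(paths, size=(416, 128)):
    """Load and resize a 3-frame sequence; return (target, [source views])."""
    frames = [np.asarray(Image.open(p).resize(size, Image.BILINEAR),
                         dtype=np.float32) / 255.0
              for p in paths]
    mid = len(frames) // 2
    target = frames[mid]                       # middle frame is the target view
    sources = frames[:mid] + frames[mid + 1:]  # remaining frames are source views
    return target, sources

# Usage with hypothetical file names:
# target, sources = load_sequence(["000001.png", "000002.png", "000003.png"])
```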



Abstract

The invention discloses a monocular unsupervised depth estimation method based on a context attention mechanism, belonging to the fields of image processing and computer vision. The method adopts a hybrid geometry-enhanced loss function together with a context attention mechanism, and uses a convolutional-neural-network-based depth estimation sub-network, edge sub-network, and camera pose estimation sub-network to obtain a high-quality depth map. The system is easy to construct: a corresponding high-quality depth map is obtained from a monocular video end-to-end using convolutional neural networks, and the program framework is easy to implement. Because depth is solved with an unsupervised method, the difficulty of obtaining ground-truth data in supervised methods is avoided, and the algorithm runs fast. Because depth is solved from a monocular video, i.e. a monocular image sequence, the difficulty of obtaining stereo image pairs, which arises when monocular depth is estimated from stereo pairs, is also avoided.
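
As a rough, non-authoritative sketch of how the three sub-networks named in the abstract could be wired end-to-end, the PyTorch snippet below uses placeholder architectures; the class names, layer choices, and the six-dimensional pose output are assumptions rather than the patented design.

```python
# Schematic wiring of the three sub-networks named in the abstract (depth,
# edge, and camera pose estimation). Architectures here are placeholders.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Placeholder encoder-decoder that predicts per-pixel inverse depth."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, img):
        return self.net(img)

class EdgeNet(nn.Module):
    """Placeholder edge-map predictor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, img):
        return self.net(img)

class PoseNet(nn.Module):
    """Placeholder regressor for relative camera pose (3 rotation + 3 translation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 6))
    def forward(self, target, source):
        return self.net(torch.cat([target, source], dim=1))

depth_net, edge_net, pose_net = DepthNet(), EdgeNet(), PoseNet()
target = torch.randn(1, 3, 128, 416)   # target view
source = torch.randn(1, 3, 128, 416)   # one source view
depth = depth_net(target)              # (1, 1, 128, 416) inverse depth
edges = edge_net(target)               # (1, 1, 128, 416) edge map
pose = pose_net(target, source)        # (1, 6) relative pose
```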

Description

Technical field

[0001] The invention belongs to the fields of image processing and computer vision, and relates to a convolutional-neural-network-based depth estimation sub-network, an edge sub-network, and a camera pose estimation sub-network that jointly obtain a high-quality depth map. Specifically, it involves a monocular unsupervised depth estimation method based on a contextual attention mechanism.

Background technique

[0002] At this stage, depth estimation, as a basic research task in computer vision, is widely applied in object detection, autonomous driving, and simultaneous localization and mapping. For depth estimation, and especially monocular depth estimation, predicting a depth map from a single image in the absence of geometric constraints and other prior knowledge is an extremely ill-posed problem. So far, deep-learning-based monocular depth estimation methods are mainly divided into two categories: supervised metho...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/55; G06T7/564; G06T7/529
CPC: G06T7/55; G06T7/564; G06T7/529; G06T2207/10016; G06T2207/20081; G06T2207/20084; G06T7/50; G06T2207/10024; G06T2207/20016; G06T7/74; G06N3/088; G06V10/454; G06V10/82; G06N3/048; G06N3/045; G06T7/75; G06N3/04; G06N3/08; G06T9/002; G06F18/2132; G06F18/2193
Inventor: 叶昕辰, 徐睿, 樊鑫, 张明亮
Owner: DALIAN UNIV OF TECH