Monocular depth estimation method, apparatus, terminal, and storage medium

A monocular depth estimation technology, applied in the field of image processing, which addresses problems such as the inability to obtain dense and accurate depth maps and the lack of geometric measurement information, thereby mitigating low generalization ability and improving prediction accuracy and reliability.

Active Publication Date: 2018-12-25
HISCENE INFORMATION TECH CO LTD


Problems solved by technology

[0004] However, for the first estimation method, although a dense depth map can be predicted, the neural network is limited to semantic understanding and has no geometric measurement information, so the depth map it predicts has poor accuracy, lacks reliability, and the network has low generalization ability.
For the second estimation method, more accurate depth values can be obtained by solving multi-view geometry. However, whether it is a SLAM system based on the feature-point method, such as the ORB (Oriented FAST and Rotated BRIEF)-SLAM algorithm, or a SLAM system based on the direct method, such as the LSD (Large-Scale Direct)-SLAM algorithm, depth information can be obtained only at a small number of feature points or high-gradient points, so only sparse or semi-dense depth maps can be obtained, not dense ones.
It can be seen that existing monocular depth estimation methods cannot obtain a depth map that is both dense and high-precision.
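The distinction drawn above between dense and semi-dense maps can be made concrete. As an illustrative sketch (not from the patent), a direct-method system such as LSD-SLAM estimates depth only at high-gradient pixels, so its "semi-dense" map covers only a small fraction of the image; the gradient threshold and the synthetic one-edge image below are hypothetical:

```python
import numpy as np

def semi_dense_mask(gray: np.ndarray, grad_thresh: float = 0.1) -> np.ndarray:
    """Boolean mask of pixels where a direct-method system would keep depth:
    only pixels whose image gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy) > grad_thresh

# Synthetic 64x64 image containing a single vertical intensity edge.
gray = np.zeros((64, 64))
gray[:, 32:] = 1.0

mask = semi_dense_mask(gray)
# Depth is recoverable only along the edge: a small fraction of all pixels.
print(mask.mean())  # 0.03125 (2 of 64 columns)
```

A dense depth map, by contrast, assigns a value to every pixel, which is what the combination of reconstruction algorithm and neural network described later aims to produce.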

Method used



Examples


Embodiment 1

[0028] Figure 1 is a flow chart of a monocular depth estimation method provided by Embodiment 1 of the present invention. This embodiment is applicable to obtaining a high-precision dense depth map corresponding to a key image frame in a monocular video, in particular to depth estimation of key image frames in smartphones, drones, robots, autonomous driving, or augmented reality. The method can be performed by a monocular depth estimation device, which can be implemented in software and/or hardware and integrated into a terminal that needs to estimate depth, such as a drone, robot, or smartphone. The method specifically includes the following steps:

[0029] S110. Obtain a monocular video.

[0030] Here, a monocular video may refer to a series of image frames captured by an ordinary camera. For example, the monocular video may be a series of RGB color image frames captured by an RGB (Red Green Blue) camera. The image frame sequ...
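As a minimal sketch of step S110, a monocular video can be represented as a sequence of RGB frames from which key image frames are later taken. The excerpt's keyframe-selection criterion is truncated, so the fixed-stride selection below is purely a hypothetical placeholder:

```python
import numpy as np

def select_keyframes(frames, stride=10):
    """Hypothetical keyframe selection: keep every `stride`-th frame.
    The actual selection criterion is not given in the excerpt."""
    return frames[::stride]

# A monocular video modeled as 100 RGB frames of size 480x640.
video = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(100)]
keyframes = select_keyframes(video, stride=10)
print(len(keyframes))  # 10
```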

Embodiment 2

[0057] Figure 3 is a schematic structural diagram of a monocular depth estimation apparatus provided by Embodiment 2 of the present invention. This embodiment is applicable to obtaining a high-precision dense depth map corresponding to a key image frame in a monocular video. The apparatus includes: a monocular video acquisition module 210, a semi-dense depth map determination module 220, and a dense depth map determination module 230.

[0058] Among them, the monocular video acquisition module 210 is configured to acquire the monocular video; the semi-dense depth map determination module 220 is configured to determine the semi-dense depth map corresponding to the key image frame in the monocular video according to a preset reconstruction algorithm; and the dense depth map determination module 230 is configured to use the key image frame and the semi-dense depth map as the input of a preset neural network model and to determine the dense depth map corresponding to the key image frame according to t...
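The three-module structure above can be sketched as follows. The module names mirror the text (210, 220, 230), but each module's internals are placeholders: the excerpt does not disclose the reconstruction algorithm or the network, so a stub grid of known depths and a mean-fill "network" stand in purely for illustration:

```python
import numpy as np

class MonocularVideoAcquisitionModule:            # module 210
    def acquire(self, frames):
        return list(frames)

class SemiDenseDepthMapModule:                    # module 220
    def determine(self, keyframe):
        # Placeholder "reconstruction": depth known only on a sparse grid,
        # NaN elsewhere (standing in for unreconstructed pixels).
        depth = np.full(keyframe.shape[:2], np.nan)
        depth[::8, ::8] = 1.0
        return depth

class DenseDepthMapModule:                        # module 230
    def determine(self, keyframe, semi_dense):
        # Placeholder "network": fill missing depths with the mean of the
        # known values, so every pixel ends up with a depth.
        dense = semi_dense.copy()
        dense[np.isnan(dense)] = np.nanmean(semi_dense)
        return dense

acq = MonocularVideoAcquisitionModule()
semi = SemiDenseDepthMapModule()
dense_mod = DenseDepthMapModule()

kf = acq.acquire([np.zeros((32, 32, 3))])[0]
dense = dense_mod.determine(kf, semi.determine(kf))
print(np.isnan(dense).any())  # False: the output depth map is dense
```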

Embodiment 3

[0079] Figure 4 is a schematic structural diagram of a terminal provided in Embodiment 3 of the present invention. Referring to Figure 4, the terminal includes:

[0080] one or more processors 310;

[0081] a memory 320, configured to store one or more programs;

[0082] an input device 330, configured to collect monocular video;

[0083] an output device 340, configured to display a dense depth map.

[0084] When the one or more programs are executed by the one or more processors 310, the one or more processors 310 implement the monocular depth estimation method provided by the embodiments of the present invention, including:

[0085] Obtain monocular video;

[0086] Determine the semi-dense depth map corresponding to the key image frame in the monocular video according to a preset reconstruction algorithm;

[0087] The key image frame and the semi-dense depth map are used as an input of a preset neural network model, and a dense depth map corresponding to the key image frame is determ...
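The excerpt says the key image frame and the semi-dense depth map together form the network's input, but not how they are combined. One common convention, assumed here purely for illustration, is to stack the RGB keyframe with the one-channel depth map into a 4-channel array; encoding missing depths as 0 is likewise an assumption:

```python
import numpy as np

def build_network_input(keyframe_rgb, semi_dense_depth):
    """Hypothetical input encoding: RGB keyframe (H, W, 3) stacked with the
    semi-dense depth map (H, W) into one (H, W, 4) array. Missing depths
    (NaN) are encoded as 0; the real encoding is not given in the excerpt."""
    depth = np.nan_to_num(semi_dense_depth, nan=0.0)[..., None]
    return np.concatenate([keyframe_rgb.astype(np.float32), depth], axis=-1)

rgb = np.zeros((240, 320, 3), dtype=np.uint8)
depth = np.full((240, 320), np.nan)
depth[::4, ::4] = 2.5  # depth known only on a sparse grid

x = build_network_input(rgb, depth)
print(x.shape)  # (240, 320, 4)
```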



Abstract

The embodiment of the invention discloses a monocular depth estimation method, apparatus, terminal, and storage medium. The method comprises the following steps: obtaining a monocular video; determining a semi-dense depth map corresponding to a key image frame in the monocular video according to a preset reconstruction algorithm; and, using the key image frame and the semi-dense depth map as inputs of a preset neural network model, determining the dense depth map corresponding to the key image frame according to the output of the preset neural network model. The technical solution of the embodiment effectively combines a preset reconstruction algorithm with a preset neural network model, thereby obtaining a dense, high-precision depth map.

Description

Technical field

[0001] Embodiments of the present invention relate to image processing technologies, and in particular to a monocular depth estimation method, device, terminal, and storage medium.

Background

[0002] Depth estimation has important research significance in the fields of autonomous driving, robot obstacle avoidance, and augmented reality. A depth estimation method estimates the depth of each pixel in an image to obtain the image's depth map. In the prior art, depth information can be obtained directly through sensors such as lidar and depth cameras, but these sensors have a certain volume and high cost, which limits the application range of depth estimation. It can be seen that if only one camera is used for depth estimation, the application scenario can be greatly simplified.

[0003] Currently, there are two methods for depth estimation using a monocular camera. The first is a method of monocular depth esti...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/55, G06N3/04
CPC: G06T7/55, G06T2207/10016, G06N3/045
Inventor: not disclosed (inventor requested non-publication)
Owner: HISCENE INFORMATION TECH CO LTD