
A binocular depth estimation method based on a deep neural network

A deep-neural-network depth estimation technology, applied in the field of multimedia image processing. It addresses problems such as inaccurate depth values, difficult calibration of hardware equipment, and sparse depth information, thereby reducing information loss and improving the accuracy and robustness of the estimated depth.

Active Publication Date: 2019-02-22
浙江七巧连云生物传感技术股份有限公司

AI Technical Summary

Problems solved by technology

[0004] However, supervised learning usually relies heavily on ground-truth depth values, and the ground truth may contain errors and noise, the depth information is relatively sparse, and the hardware is difficult to calibrate, all of which make the estimated depth values inaccurate.




Detailed Description of the Embodiments

[0036] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments. The following embodiments are descriptive only, not restrictive, and do not limit the protection scope of the present invention.

[0037] 1) Perform image preprocessing, such as cropping and transformation, on the input left and right viewpoint images for data augmentation.

[0038] The present invention adopts images of the left and right viewpoints acquired by a binocular camera as network input, and can output a monocular depth map in either the left or the right camera coordinate system. For convenience of description, the output monocular depth map mentioned herein is the depth map of the left image. The input required by the present invention is a pair of left- and right-view RGB images, so part of the data in the synthetic SceneFlow dataset and the KITTI2015 dataset in ...
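The paired preprocessing of step 1) can be sketched as follows. The crop sizes and photometric transforms below are illustrative assumptions, since the patent text does not fix them; the key constraint is that spatial augmentations must be applied identically to both views, otherwise the stereo correspondence between left and right images is destroyed.

```python
import numpy as np

def paired_random_crop(left, right, crop_h, crop_w, rng=None):
    """Apply the SAME random crop to a left/right image pair (H, W, 3).

    One crop window is drawn and used for both views, preserving the
    epipolar correspondence between them. (Illustrative sketch only.)
    """
    rng = rng or np.random.default_rng()
    h, w = left.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    lft = rng.integers(0, w - crop_w + 1)
    return (left[top:top + crop_h, lft:lft + crop_w],
            right[top:top + crop_h, lft:lft + crop_w])

def paired_color_jitter(left, right, rng=None):
    """Apply the same random gain/gamma shift to both views.

    Photometric changes are safe to share across views; the output is
    normalized to [0, 1]. Parameter ranges are illustrative.
    """
    rng = rng or np.random.default_rng()
    gain = rng.uniform(0.8, 1.2)
    gamma = rng.uniform(0.8, 1.2)
    aug = lambda img: np.clip(
        (img.astype(np.float32) / 255.0 * gain) ** gamma, 0.0, 1.0)
    return aug(left), aug(right)
```

In practice such paired transforms would be composed per training batch; the invariant to test is that both views always go through identical spatial parameters.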


PUM

No PUM

Abstract

The invention relates to a binocular depth estimation method based on a deep neural network, which comprises the following steps: 1) preprocessing the input left and right viewpoint images for data augmentation; 2) constructing a multi-scale network model for binocular depth estimation, comprising multiple convolution layers, activation layers, residual connections, multi-scale pooling connections, and linear upsampling layers; 3) designing a loss function and minimizing it during training to obtain the optimal network weights; 4) inputting the images to be processed into the network model to obtain the corresponding depth map, and repeating the above steps until the network converges or the maximum number of training iterations is reached. The invention adopts the idea of unsupervised learning: only the left and right viewpoint images captured by a binocular camera are used as network input. The adaptive design of the network treats the intrinsic and extrinsic camera parameters as model parameters, so the method can be applied to multiple camera systems without modifying the network.
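The unsupervised training signal underlying such methods is typically a reconstruction loss: the left view is reconstructed by warping the right view with the predicted disparity, and the photometric difference is penalized. The sketch below is a minimal numpy illustration of that standard loss, not the patent's actual multi-scale network or full loss function.

```python
import numpy as np

def warp_right_to_left(right, disparity):
    """Reconstruct the left view by sampling the right image at x - d(x).

    `right`: (H, W) grayscale image; `disparity`: (H, W) predicted
    left-view disparities in pixels. Uses linear interpolation along x.
    """
    h, w = right.shape
    xs = np.arange(w)[None, :] - disparity            # sample coordinates
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)  # left neighbor
    frac = np.clip(xs - x0, 0.0, 1.0)                 # interpolation weight
    rows = np.arange(h)[:, None]
    return (1 - frac) * right[rows, x0] + frac * right[rows, x0 + 1]

def photometric_l1(left, right, disparity):
    """Mean absolute reconstruction error used as the training signal."""
    return float(np.mean(np.abs(left - warp_right_to_left(right, disparity))))
```

With a correct disparity the reconstruction matches the left image away from the occluded border, so minimizing this loss drives the network toward accurate disparities without any ground-truth depth.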

Description

Technical Field

[0001] The invention belongs to the field of multimedia image processing, relates to computer vision and deep learning technology, and is a binocular depth estimation method based on a deep neural network.

Technical Background

[0002] Depth estimation has long been a popular research direction in computer vision. The 3D data provided by a depth map supplies the information required for 3D reconstruction, augmented reality (AR), intelligent navigation, and other applications. At the same time, the positional relationships expressed by a depth map are extremely important in many image tasks and can further simplify image processing algorithms. At present, common depth estimation methods fall into two categories: monocular depth estimation and binocular depth estimation.

[0003] The monocular depth estimation method uses only one camera. In the traditional algorithm, the camera ...
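For reference, binocular methods recover depth from the disparity between matched pixels via the classic triangulation relation depth = f·B/d, where f is the focal length in pixels and B the baseline between the two cameras. A minimal sketch (parameter values are illustrative, not from the patent):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m, eps=1e-6):
    """Classic stereo relation: depth = f * B / d.

    `focal_px`: focal length in pixels; `baseline_m`: distance between
    the two camera centres in metres. `eps` guards against division by
    zero at zero disparity (points at infinity).
    """
    return focal_px * baseline_m / max(disparity_px, eps)
```

This inverse relationship is why small disparity errors on distant objects translate into large depth errors, one motivation for learning-based refinement.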

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 7/80; G06T 3/40; G06N 3/04
CPC: G06T 3/4007; G06T 7/80; G06N 3/045
Inventor: 侯永宏, 吕晓冬, 许贤哲, 陈艳芳, 赵健
Owner 浙江七巧连云生物传感技术股份有限公司