
Unsupervised monocular depth estimation method based on generative adversarial network

An unsupervised depth estimation technique, applied in the field of robot vision, that addresses the problems of high sensor cost and inaccurate camera pose estimation, improving both estimation accuracy and image generation quality.

Pending Publication Date: 2019-11-12
NORTHEASTERN UNIV
4 Cites · Cited by 18

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to provide an unsupervised monocular depth estimation method based on generative adversarial networks, which solves the problems of the high cost of current depth estimation sensors and inaccurate camera pose estimation, while also enabling environment perception and three-dimensional scene reconstruction from the camera, providing a basis for autonomous driving and other related tasks.



Embodiment Construction

[0052] As shown in Figure 1, the unsupervised monocular depth estimation method based on a generative adversarial network of the present invention includes the following steps:

[0053] Step 1: Acquire strictly time-synchronized left and right image pairs with a binocular camera, build a binocular color image dataset, and rectify the binocular color images;

[0054] Step 2: Establish an unsupervised generative adversarial network model, input the rectified binocular color images into the network, and train the network model by iterative regression;

[0055] Step 3: Input a monocular color image into the trained network model to generate the corresponding disparity map;

[0056] The unsupervised generative adversarial network model established by the present invention includes a generator and a discriminator; the generator uses a ResNet50 network with residual connections, and the discriminator uses a VGG-16 network. The generator inc...
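The patent text above does not reproduce the training objective. For context, a generator–discriminator model of this kind is conventionally trained with the standard generative adversarial min-max objective (the symbols below are the usual ones from the GAN literature, not taken from the patent):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

In unsupervised stereo-based depth estimation, this adversarial term is typically combined with a photometric reconstruction loss: the predicted disparity is used to warp one view of the stereo pair into the other, and the warped image is compared against the real one, so no ground-truth depth is required.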



Abstract

The invention discloses an unsupervised monocular depth estimation method based on a generative adversarial network, and the method comprises the following steps: 1, obtaining left and right image pairs with strict time synchronization through a binocular camera, building a binocular color image dataset, and rectifying the binocular color images; 2, establishing an unsupervised generative adversarial network model, inputting the rectified binocular color images into the network, and performing training and iterative regression on the network model; 3, inputting a monocular color image into the trained network model to generate a disparity map corresponding to the monocular color image; and 4, converting the disparity map into depth information through a binocular disparity-depth conversion formula, and synthesizing a depth map. According to the depth estimation method provided by the invention, the monocular color image is converted into a depth map containing the depth information by using the unsupervised network model, and no complex real depth data is needed.
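Step 4 of the abstract invokes the standard binocular disparity-depth relation: for a rectified stereo pair, depth = f · B / d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity in pixels. A minimal sketch of this conversion (the calibration values in the example are hypothetical, loosely modeled on a KITTI-like setup, and are not taken from the patent):

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to a depth map (meters)
    using the stereo relation depth = f * B / d.
    focal_length_px and baseline_m come from camera calibration;
    eps guards against division by zero at invalid pixels."""
    disparity = np.asarray(disparity, dtype=np.float64)
    return focal_length_px * baseline_m / np.maximum(disparity, eps)

# Hypothetical calibration: f = 721 px, baseline = 0.54 m.
# A disparity of ~38.9 px then corresponds to roughly 10 m depth.
depth_map = disparity_to_depth(np.array([[38.9, 19.5]]), 721.0, 0.54)
```

Note that depth is inversely proportional to disparity, so small disparity errors on distant objects (small d) translate into large depth errors; this is why accurate disparity prediction is the crux of the method.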

Description

Technical field

[0001] The invention belongs to the technical field of robot vision and relates to an unsupervised monocular depth estimation method based on a generative adversarial network.

Background technique

[0002] Depth information is a core issue in the computer vision fields of visual SLAM, 3D scene reconstruction, and medical imaging. In the field of robotics, accurate depth estimation is essential for a computer vision system to understand the three-dimensional environment for motion planning, navigation and positioning, obstacle avoidance, and control decision-making.

[0003] Generally speaking, there are two main approaches to depth estimation: direct measurement with 3D measurement sensors and depth recovery from image information. Three-dimensional measurement mainly relies on direct measurement sensors, such as lidar designed by Velodyne, which builds a three-dimensional map of the environment by emitting laser beams at a certain frequency to scan t...

Claims


Application Information

IPC(8): G06T7/593, G06N3/04
CPC: G06T7/593, G06T2207/10012, G06T2207/20081, G06N3/045
Inventor: 房立金, 赵乾坤, 万应才
Owner: NORTHEASTERN UNIV