
A Method of Image Depth Estimation Based on Generative Adversarial Networks

An image depth estimation technology, applied in the field of 3D reconstruction in computer vision, which addresses the low accuracy of monocular image depth estimation, high hardware equipment requirements, and the inability to accurately estimate depth from monocular images of different scales.

Active Publication Date: 2021-08-24
OCEAN UNIV OF CHINA
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0005] The present invention provides an image depth estimation method based on a generative adversarial network, to solve the technical problems of low accuracy of depth estimation for monocular images, high hardware equipment requirements, and the inability to accurately estimate the depth of monocular images of different scales in the same scene. This depth estimation method converts a monocular scene image into a depth map containing distance information, thereby providing a basis for research on 3D scene reconstruction.



Examples


Embodiment

[0039] The embodiments described below are preferred embodiments of the present application.

[0040] An image depth estimation method based on a generative adversarial network, which uses a small number of paired monocular scene images and corresponding depth map images containing depth information, and converts monocular scene images into depth map images containing scene depth information through a supervised deep learning method. The method includes the following steps:

[0041] First, use a device that can capture depth information, such as a Kinect unit (a somatosensory gaming device) or a lidar, to collect clear RGB-D images (an RGB-D image comprises a color image and a corresponding depth map image), and construct a scene RGB-D image dataset, where the color images in the RGB-D image dataset serve as the monocular scene images. Then perform rotation, scale transformation, cropping, and color change operations on the scene RGB-D image pairs, in order to enhance t...
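The augmentation step above can be sketched as follows. The key constraint is that geometric transforms (rotation, flip, crop) must be applied identically to the RGB image and its depth map so the pair stays aligned, while the color change applies to the RGB image only. All parameter values (crop size, brightness range) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_pair(rgb, depth):
    # Random 90-degree rotation, applied to BOTH images so they stay aligned.
    k = int(rng.integers(0, 4))
    rgb, depth = np.rot90(rgb, k), np.rot90(depth, k)
    # Random horizontal flip, again applied jointly.
    if rng.random() < 0.5:
        rgb, depth = rgb[:, ::-1], depth[:, ::-1]
    # Random crop to an assumed fixed size of 32x32, same window for both.
    h, w = rgb.shape[:2]
    top = int(rng.integers(0, h - 32 + 1))
    left = int(rng.integers(0, w - 32 + 1))
    rgb = rgb[top:top + 32, left:left + 32]
    depth = depth[top:top + 32, left:left + 32]
    # Color change (brightness scaling) on the RGB image ONLY:
    # depth values encode distance and must not be altered.
    rgb = np.clip(rgb * rng.uniform(0.8, 1.2), 0.0, 1.0)
    return rgb, depth

rgb = rng.random((48, 48, 3))       # stand-in for a collected color image
depth = rng.random((48, 48, 1))     # stand-in for its depth map
a_rgb, a_depth = augment_pair(rgb, depth)
print(a_rgb.shape, a_depth.shape)   # (32, 32, 3) (32, 32, 1)
```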


PUM

No PUM

Abstract

The present invention provides a method for estimating image depth based on a generative adversarial network. The method collects scene RGB-D images and constructs a scene RGB-D image dataset, where the color images in the RGB-D image dataset are used as the monocular scene images; it then constructs a model based on a generative adversarial network, inputs the monocular scene image into the network model, and converts the monocular scene image into a final synthesized depth map image through training and iterative feedback. The depth estimation method provided by the present invention converts a monocular scene image into a depth map image containing distance information, thereby providing a basis for research on three-dimensional scene reconstruction.
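The supervised adversarial training described above can be illustrated with a pix2pix-style conditional-GAN objective: a generator maps the RGB image to a depth map, a discriminator scores (RGB, depth) pairs as real or synthesized, and a supervised L1 term keeps the prediction close to the ground-truth depth. The toy linear "networks", shapes, and weight LAMBDA below are all hypothetical illustrations; the patent text does not specify the architecture or loss weights.

```python
import numpy as np

rng = np.random.default_rng(0)
LAMBDA = 100.0  # assumed weight on the supervised L1 term

def G(rgb, w):
    """Toy stand-in generator: per-pixel linear map RGB -> depth."""
    return rgb @ w                                  # (H, W, 3) -> (H, W, 1)

def D(rgb, depth, v):
    """Toy stand-in discriminator: sigmoid of a mean linear pair feature."""
    feat = np.concatenate([rgb, depth], axis=-1)    # (H, W, 4)
    return 1.0 / (1.0 + np.exp(-np.mean(feat @ v)))

rgb = rng.random((16, 16, 3))
depth_gt = rgb @ np.array([[0.5], [0.3], [0.2]])    # synthetic ground truth

w = rng.normal(scale=0.1, size=(3, 1))
v = rng.normal(scale=0.1, size=(4, 1))

fake = G(rgb, w)
d_real, d_fake = D(rgb, depth_gt, v), D(rgb, fake, v)

# Discriminator objective: push d_real toward 1 and d_fake toward 0.
d_loss = -np.log(d_real) - np.log(1.0 - d_fake)
# Generator objective: fool the discriminator, plus the supervised L1 term
# on the synthesized depth map (the "iterative feedback" of the abstract
# alternates these two updates).
g_loss = -np.log(d_fake) + LAMBDA * np.mean(np.abs(fake - depth_gt))

print(d_loss > 0.0, g_loss > 0.0)
```

In a real implementation G and D would be convolutional networks and the two losses would be minimized alternately with gradient descent; this sketch only shows how the adversarial and supervised terms combine for one step.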

Description

Technical field

[0001] The invention relates to the technical field of three-dimensional reconstruction in computer vision, and in particular to an image depth estimation method based on a generative adversarial network.

Background technique

[0002] Distance information is the basis of research in fields such as 3D scene reconstruction in computer vision. If the three-dimensional structure of a scene can be accurately inferred from a scene image, humans and computers can understand the three-dimensional relationships between objects in the image, thereby better understanding the scene; this would also greatly promote applications across the field of computer vision, such as 3D movie production, robot navigation, and autonomous driving.

[0003] Traditional visual algorithms for scene depth estimation are generally binocular or multi-view, based mainly on optical geometric constraints, such as stereo image matching and structure from motion (SfM). In addition, there are ...
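The binocular methods mentioned in the background recover depth from optical geometry: for a rectified stereo pair, depth is Z = f * B / d, where f is the focal length in pixels, B the camera baseline in metres, and d the disparity in pixels. This is the geometric constraint the invention avoids by estimating depth from a single image. The numeric values below are illustrative only.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# A point with 20 px disparity, under an assumed 700 px focal length
# and a 0.1 m baseline, lies 3.5 m from the cameras:
print(depth_from_disparity(700.0, 0.1, 20.0))  # 3.5
```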

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Patent (China)
IPC (8): G06T7/50
CPC: G06T2207/10028; G06T2207/20081; G06T2207/20084; G06T7/50
Inventor: 俞智斌, 张少永, 郑海永, 郑冰
Owner: OCEAN UNIV OF CHINA