Method for joint estimation of scene depth and semantics from a single image

A joint-estimation, single-image technology, applied in the field of estimating the depth information and semantic information of a scene. It can solve problems such as poor performance, heavy computation, and high cost, and achieves the effects of simple implementation and good scalability.

Active Publication Date: 2019-08-13
TIANJIN UNIV


Problems solved by technology

For example, 3D LiDAR equipment is very expensive; depth cameras based on structured light, such as the Kinect, cannot be used outdoors, have a limited measurement range, and produce relatively noisy depth images; and binocular cameras require stereo-matching algorithms, which are computationally expensive and perform poorly in weakly textured scenes.



Embodiment Construction

[0023] The invention aims to achieve depth estimation and semantic segmentation from color images alone. Starting from any device capable of capturing color images, the present invention obtains a depth map and a semantic map through iterative network learning.

[0024] The present invention proposes a method for jointly estimating depth and semantic information through an iterative network, which is described in detail with reference to the drawings and embodiments as follows:

[0025] The present invention takes a color image captured by some device and inputs it into an iterative depth-estimation and semantic-segmentation network for joint optimization, obtaining the depth map and semantic segmentation map corresponding to the image. As shown in Fig. 1, this is the iterative network design proposed in the embodiment of the present invention. The iterative network is a multi-task deep convolutional network. It mainly i...
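The iterative joint-optimization idea can be sketched as follows. This is not the patent's actual network: it is a toy numpy illustration in which a shared "encoder" feeds two task heads, and on each iteration every head also consumes the other task's latest prediction. All shapes, weights, and names (`w_enc`, `w_depth`, `w_sem`, `joint_estimate`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8          # toy image size
C, F, K = 3, 16, 5   # input channels, feature channels, semantic classes

# Hypothetical weights: a shared "encoder" and two task heads that each
# also consume the other task's current prediction.
w_enc   = rng.normal(0.0, 0.1, (C, F))
w_depth = rng.normal(0.0, 0.1, (F + K, 1))  # depth head also sees semantics
w_sem   = rng.normal(0.0, 0.1, (F + 1, K))  # semantic head also sees depth

def joint_estimate(rgb, n_iters=3):
    """Iteratively refine depth and semantics from one color image."""
    pixels = rgb.reshape(-1, C)
    feat = np.tanh(pixels @ w_enc)          # shared per-pixel features
    depth = np.zeros((H * W, 1))            # initial guesses
    sem = np.full((H * W, K), 1.0 / K)
    for _ in range(n_iters):
        # Each branch is conditioned on the other's latest output,
        # exploiting the complementarity between the two tasks.
        depth = np.concatenate([feat, sem], axis=1) @ w_depth
        logits = np.concatenate([feat, depth], axis=1) @ w_sem
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        sem = e / e.sum(axis=1, keepdims=True)  # per-pixel softmax
    return depth.reshape(H, W), sem.argmax(axis=1).reshape(H, W)

rgb = rng.random((H, W, C))
depth_map, label_map = joint_estimate(rgb)
```

In a real implementation each matrix product would be a convolutional block, but the cross-feeding of predictions between the two branches is the core of the iterative joint optimization described above.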


Abstract

The invention belongs to the fields of computer vision and computer graphics. An iterative network is designed to jointly estimate depth information and semantic information, exploiting the complementarity between the two to improve each other's predictions. According to the technical scheme, the method for jointly estimating scene depth and semantics from a single image comprises the following steps: 1) photographing with any device equipped with a monocular camera, and using the obtained color image as the input of the network; and 2) iterative network: inputting the color image into a framework formed by a multi-task deep convolutional network for the iterative joint optimization of depth estimation and semantic segmentation, estimating the depth and semantic information of the color image, using the depth information to reconstruct the three-dimensional scene, and using the semantic information to achieve understanding of the scene. The method is mainly applied to image-processing occasions.
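The step of "using the depth information to reconstruct a three-dimensional scene" is typically done by back-projecting the estimated depth map through a pinhole camera model. A minimal sketch, assuming hypothetical intrinsics `fx`, `fy`, `cx`, `cy` (the patent does not specify a camera model):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 4x4 depth map with every pixel 2 units away; intrinsics are made up.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=1.5, cy=1.5)
# Pixel (0, 0) maps to x = (0 - 1.5) * 2 / 1 = -3, y = -3, z = 2.
```

Each pixel with a valid depth contributes one 3D point, so an estimated depth map for a full-resolution image yields a dense point cloud of the scene.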

Description

Technical field

[0001] The invention belongs to the fields of computer vision and computer graphics, and particularly relates to using deep learning to estimate the depth information and semantic information of a scene.

Background technique

[0002] In the field of computer vision, monocular depth estimation has long been a widely discussed topic. Depth information is of great help in applications such as 3D reconstruction, virtual reality, and navigation. Although there is now much hardware that can directly obtain depth maps, each kind has its own shortcomings. For example, 3D LiDAR equipment is very expensive; structured-light depth cameras such as the Kinect cannot be used outdoors, have a limited measurement range, and produce relatively noisy depth maps; and binocular cameras require stereo-matching algorithms, which are computationally expensive and perform poorly in weakly textured scenes. The monocul...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11; G06K9/62; G06N3/04; G06N3/08; G06T7/50
CPC: G06T7/11; G06T7/50; G06N3/08; G06T2207/10024; G06N3/044; G06N3/045; G06F18/253; Y02T10/40
Inventor: 杨敬钰, 徐吉, 李坤, 岳焕景
Owner: TIANJIN UNIV