
Joint depth estimation and semantic segmentation from a single image

A semantic-segmentation and depth-estimation technology, applied in the field of joint depth estimation and semantic annotation, that addresses the problems of error propagation and lack of accuracy.

Active Publication Date: 2016-12-07
ADOBE SYST INC

AI Technical Summary

Problems solved by technology

However, semantic annotation is conventionally addressed separately from, or sequentially with, depth estimation using different and unrelated techniques; these approaches lack accuracy and may cause errors introduced in early stages of a technique's execution to be propagated to later stages.


Embodiment Construction

[0025] Overview

[0026] Semantic segmentation and depth estimation are two fundamental problems in image understanding. Although the two tasks have been found to be strongly related and mutually beneficial, these problems are conventionally addressed separately or sequentially using different techniques, which creates inconsistencies, errors and inaccuracies.

[0027] In the following, typical failure cases of the two tasks are examined to show their complementary effects, which motivates a unified coarse-to-fine framework for joint semantic segmentation and depth estimation usable on a single image. For example, a framework is proposed that first predicts a coarse global model, composed of semantic labels and depth values (e.g., absolute depth values), by machine learning to represent the overall context of an image. Semantic labels describe "what" is represented by corresponding pixels in the image, e.g., sky, plants, ground, walls, buildings, etc. The depth val...
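To make the coarse-to-fine structure concrete, the following is a minimal sketch of such a pipeline, assuming callable stand-ins for the learned models. The function names (`global_model`, `local_model`, `segment_fn`) and the per-segment refinement loop are illustrative assumptions, not the patented architecture itself:

```python
import numpy as np

def coarse_to_fine(image, global_model, local_model, segment_fn):
    """Hypothetical sketch of a coarse-to-fine joint prediction pipeline.

    global_model(image)                -> (labels, depths) over the whole image
    segment_fn(image)                  -> (H, W) array of integer segment ids
    local_model(image, mask, lab, dep) -> refined (labels, depths) for the mask
    """
    labels, depths = global_model(image)   # stage 1: coarse global layout (overall context)
    segments = segment_fn(image)           # stage 2: decompose the scene into segments
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # stage 3: refine each segment locally, given the global prediction as context
        labels[mask], depths[mask] = local_model(image, mask, labels, depths)
    return labels, depths
```

Because segment masks are disjoint, each local refinement only overwrites its own pixels; how the local and global predictions are actually combined is the subject of the merging step described later.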



Abstract

Joint depth estimation and semantic labeling techniques usable for processing of a single image are described. In one or more implementations, global semantic and depth layouts are estimated of a scene of the image through machine learning by the one or more computing devices. Local semantic and depth layouts are also estimated for respective ones of a plurality of segments of the scene of the image through machine learning by the one or more computing devices. The estimated global semantic and depth layouts are merged with the local semantic and depth layouts by the one or more computing devices to semantically label and assign a depth value to individual pixels in the image.
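A minimal sketch of the merging step described in the abstract, assuming the global layout is given as per-pixel class probabilities and depths and each local layout as per-segment estimates. The fixed blend weight `alpha` is a simplifying assumption standing in for whatever learned merge the actual technique uses:

```python
import numpy as np

def merge_layouts(global_sem, global_depth, segments, local_sems, local_depths, alpha=0.5):
    """Merge a coarse global layout with per-segment local layouts.

    global_sem   -- (H, W, C) per-pixel class probabilities from the global estimate
    global_depth -- (H, W) per-pixel depths from the global estimate
    segments     -- (H, W) integer segment ids
    local_sems   -- dict: segment id -> (C,) class probabilities for that segment
    local_depths -- dict: segment id -> scalar depth estimate for that segment
    alpha        -- fixed blend weight (an assumption; the described merge is learned)
    """
    sem = global_sem.copy()
    depth = global_depth.copy()
    for seg_id, probs in local_sems.items():
        mask = segments == seg_id
        sem[mask] = alpha * sem[mask] + (1 - alpha) * probs              # blend class scores
        depth[mask] = alpha * depth[mask] + (1 - alpha) * local_depths[seg_id]
    labels = sem.argmax(axis=-1)   # final semantic label per pixel
    return labels, depth
```

The point of the sketch is only the data flow: both estimates contribute to every pixel, so neither the global nor the local stage alone determines the final label and depth.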

Description

Technical Field

[0001] This application generally relates to joint depth estimation and semantic annotation for a single image.

Background Technique

[0002] Depth estimation in images is generally used to estimate the distance between objects in an image scene and the camera used to capture the image. This is conventionally performed using stereo images or dedicated depth sensors (e.g., time-of-flight or structured-light cameras) to identify objects, support gestures, and so forth. Consequently, the reliance on dedicated hardware, such as stereo cameras or dedicated depth sensors, limits the usability of these conventional techniques.

[0003] Semantic annotation in images is used to assign labels to pixels in the image, e.g., to describe objects at least partially represented by those pixels, such as sky, ground, buildings, etc. This can be used to support functions such as object removal and replacement in images, masking, segmentation techniques, etc. However, conventional ways to perform s...


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06T7/00
CPC: G06T2207/20161; G06T2207/20081; G06T7/50; G06N3/08; G06N20/10; G06V20/10; G06V10/454; G06V20/70; G06V10/82; G06V30/19173; G06N7/01; G06N3/045; G06T7/11; G06F18/24137
Inventors: Zhe Lin, S. Cohen, Peng Wang, Xiaohui Shen, B. Price
Owner: ADOBE SYST INC