
Indoor scene semantic annotation method based on RGB-D data

A method for image semantic annotation, in particular for semantic annotation of indoor scenes based on RGB-D data, which addresses the problems that existing schemes exploit depth information in a limited way, that the quantization level of the annotation primitives is difficult to select, and that geometric depth information receives insufficient attention in context reasoning.

Active Publication Date: 2015-07-29
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

However, most current semantic annotation works use depth information only to construct region-level features, ignoring its role in context inference, and the depth cues they exploit are relatively limited.
[0007] In summary, existing semantic annotation schemes for indoor scenes generally share two problems: the quantization level of the annotation primitives is difficult to select properly, and the role of geometric depth information in context reasoning has not received enough attention.


Examples


Detailed Description of the Embodiments

[0053] The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

[0054] As shown in figure 1, the present invention designs an indoor scene semantic annotation method based on RGB-D data. In practical application, semantic annotation of an indoor scene image is carried out with a coarse-to-fine, globally recursive-feedback annotation framework based on RGB-D information. The framework is composed of coarse-grained region-level semantic label inference and fine-grained pixel-level semantic label refinement, which are alternately and iteratively updated, and it comprises the following steps:

[0055] Step 001. Using a simple linear iterative clustering (SLIC) over-segmentation algorithm guided by hierarchical image saliency, over-segment the RGB images in the RGB-D training data set, ob...
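As a rough illustration of the over-segmentation step only, the sketch below runs plain SLIC from scikit-image on an RGB image; the patent's hierarchical-saliency guidance is not reproduced here, and the file name and parameter values are hypothetical.

# Minimal sketch of SLIC over-segmentation into superpixels (plain SLIC only;
# the hierarchical-saliency guidance described in the patent is not modeled).
from skimage.io import imread
from skimage.segmentation import slic

def oversegment(rgb_path, n_segments=500, compactness=10.0):
    """Over-segment an RGB image and return it with its superpixel label map."""
    rgb = imread(rgb_path)                        # H x W x 3 image
    superpixels = slic(rgb, n_segments=n_segments,
                       compactness=compactness, start_label=0)
    return rgb, superpixels                       # integer label per pixel

if __name__ == "__main__":
    rgb, superpixels = oversegment("scene_0001_rgb.png")   # hypothetical file
    print("number of superpixels:", superpixels.max() + 1)

The superpixels produced here would serve as the region-level annotation primitives for the subsequent label inference stage.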



Abstract

The invention relates to an indoor scene semantic annotation method based on RGB-D data. The method builds a coarse-to-fine semantic annotation framework with global recursive feedback based on RGB-D data, and divides the framework into two parts: coarse-grained region-level semantic label inference and fine-grained pixel-level semantic label refinement. Unlike traditional region-level or pixel-level annotation frameworks, this framework re-establishes the relationship between the two parts and introduces a global recursive feedback mechanism, so that the coarse-grained region-level and fine-grained pixel-level annotation results are alternately and iteratively updated and optimized. In this way, the multi-modal information of different region layers in scene images is fused more effectively, and the common difficulty of properly selecting annotation primitives in traditional indoor scene semantic annotation schemes is alleviated to a certain degree.
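To make the alternation between the two stages concrete, here is a minimal sketch of such a coarse-to-fine loop with global recursive feedback. It is not the patented implementation: the functions infer_region_labels and refine_pixel_labels, the label broadcasting, and the convergence test are hypothetical placeholders.

# Minimal sketch of alternating coarse-to-fine updating with recursive feedback.
# infer_region_labels / refine_pixel_labels stand in for the region-level
# inference and pixel-level refinement stages described in the abstract.
import numpy as np

def annotate(rgb, depth, superpixels, infer_region_labels, refine_pixel_labels,
             max_iters=10, tol=1e-3):
    pixel_labels = None
    for _ in range(max_iters):
        # Coarse stage: one semantic label per superpixel region, optionally
        # conditioned on the previous pixel-level result (global feedback).
        region_labels = infer_region_labels(rgb, depth, superpixels, pixel_labels)

        # Broadcast region labels to pixels as the coarse initialization.
        coarse = region_labels[superpixels]

        # Fine stage: refine the labeling per pixel using RGB-D cues.
        refined = refine_pixel_labels(rgb, depth, coarse)

        # Stop once the pixel labeling has effectively stopped changing.
        if pixel_labels is not None and np.mean(refined != pixel_labels) < tol:
            pixel_labels = refined
            break
        pixel_labels = refined
    return pixel_labels

The feedback enters through the pixel_labels argument passed back into the region-level stage on the next iteration; the exact form of that coupling in the patent is behind the truncated description and is not reproduced here.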

Description

Technical field

[0001] The invention relates to an image semantic annotation method, in particular to an indoor scene semantic annotation method based on RGB-D data, and belongs to the technical field of semantic label classification in computer vision.

Background technique

[0002] Image semantic annotation is a core component of scene understanding in computer vision; its basic goal is to densely assign a predefined semantic category label to every pixel of a given query image. Given the ambiguity, complexity and abstraction of image semantics, image semantic models are generally hierarchical; "target semantics" sits at the middle level of this hierarchy and plays a linking role in many high-level semantic reasoning processes. According to the quantization level of the annotation primitives, most current image semantic annotation schemes can be roughly divided into two categories: pixel-level ...


Application Information

IPC(8): G06F17/30; G06K9/62
Inventor: 冯希龙 (Feng Xilong), 刘天亮 (Liu Tianliang)
Owner: NANJING UNIV OF POSTS & TELECOMM