
Super-pixel-level indoor scene semantic annotation method

A technology for semantic annotation of indoor scenes, classified under character and pattern recognition, instruments, computer parts, etc. It addresses the problem of high computational cost and achieves the effect of avoiding that huge computational cost.

Active Publication Date: 2019-08-06
BEIJING UNIV OF TECH
Cites: 13 · Cited by: 7

AI Technical Summary

Problems solved by technology

[0009] To overcome the defects of the prior art, the technical problem to be solved by the present invention is to provide a super-pixel-level indoor scene semantic labeling method that avoids the huge computational cost of applying a deep network to pixel-level indoor scene labeling and enables a deep network to accept a collection of superpixels as input.


Image

  • Super-pixel-level indoor scene semantic annotation method

Examples


Embodiment Construction

[0018] The present invention proposes a superpixel deep network to perform super-pixel-level semantic annotation of RGB-D indoor scenes. First, the SLIC algorithm is used to segment the RGB-D indoor scene image into superpixels. For each superpixel, its neighboring superpixels are found according to certain rules, and the superpixel to be labeled is recorded as the core superpixel. The kernel descriptor features and geometric features (primary features) of the core superpixel and its neighborhood superpixels serve as the input of the superpixel deep network, which learns their multimodal fusion features. Based on the multimodal fusion features of the core superpixel's neighborhood superpixels, the network learns the neighborhood context features of the core superpixel; these are concatenated with the multimodal fusion features of the core superpixel to form the feature representation for superpixel classification, achieving superinte...
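The segmentation and neighborhood-search steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes 4-connectivity ("sharing a pixel boundary") as the neighboring rule, and uses a toy label map in place of an actual SLIC result (in practice one could obtain `labels` from an RGB image with, e.g., scikit-image's `segmentation.slic`).

```python
import numpy as np

def superpixel_neighbors(labels):
    """Map each superpixel label to the set of labels that share a
    horizontal or vertical pixel boundary with it (4-connectivity)."""
    neighbors = {int(l): set() for l in np.unique(labels)}
    # Compare each pixel with its right neighbor, then its bottom neighbor.
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        boundary = a != b
        for u, v in zip(a[boundary].ravel(), b[boundary].ravel()):
            neighbors[int(u)].add(int(v))
            neighbors[int(v)].add(int(u))
    return neighbors

# Toy 4x4 label map standing in for a SLIC segmentation result.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
nbrs = superpixel_neighbors(labels)
# Superpixel 0 borders 1 (to its right) and 2 (below), but not 3 (diagonal).
```

With this adjacency in hand, each superpixel in turn is treated as the core superpixel and its entry in `nbrs` gives the neighborhood whose features feed the network.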



Abstract

The invention discloses a super-pixel-level indoor scene semantic annotation method, which avoids the high computational cost of applying a deep network to pixel-level indoor scene annotation and enables the deep network to accept a superpixel set as input. The method comprises the following steps: (1) performing superpixel segmentation on an indoor scene color image using the simple linear iterative clustering (SLIC) segmentation algorithm; (2) extracting superpixel kernel descriptor features (primary features) from the superpixels obtained in step (1), in combination with the indoor scene depth image; (3) constructing the neighborhood of each superpixel; (4) constructing a superpixel deep network, Super Pixel Net, and learning superpixel multimodal features; then, for each superpixel to be labeled, combining its multimodal features with those of its neighborhood superpixels to perform super-pixel-level semantic annotation of the RGB-D image of the indoor scene.
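The final fusion in step (4), combining a core superpixel's features with those of its neighborhood before classification, can be sketched as below. The function name, feature dimension, and mean pooling over neighbors are illustrative assumptions; the patent learns the neighborhood context features with a deep network rather than pooling them directly.

```python
import numpy as np

def classification_feature(core_feat, neighbor_feats):
    """Concatenate a core superpixel's multimodal feature with a
    neighborhood-context vector. Mean pooling over neighbors is an
    illustrative stand-in for the patent's learned context features."""
    context = neighbor_feats.mean(axis=0)
    return np.concatenate([core_feat, context])

rng = np.random.default_rng(0)
core = rng.random(64)            # fused feature of the core superpixel
neighbors = rng.random((5, 64))  # fused features of 5 neighborhood superpixels
feat = classification_feature(core, neighbors)
# feat has dimension 128: 64 (core) + 64 (neighborhood context)
```

The concatenated vector `feat` would then be passed to a classifier that assigns one semantic label per superpixel, rather than per pixel.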

Description

Technical Field

[0001] The invention relates to the technical fields of multimedia technology and computer graphics, and in particular to a superpixel-level indoor scene semantic labeling method.

Background Technique

[0002] As essential work in computer vision research, semantic annotation of indoor scenes has long been a research hotspot and difficulty in the field of image processing. Compared with outdoor scenes, indoor scenes have the following characteristics: 1. there are many types of objects; 2. occlusion between objects is more severe; 3. scenes vary greatly from one another; 4. illumination is uneven; 5. discriminative features are lacking. Therefore, indoor scenes are more difficult to label than outdoor scenes. Indoor scene semantic annotation is the core of indoor scene understanding. It has a wide range of applications in service, fire protection, and other fields, such as robot localization and environment interaction, and event det...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62
CPC: G06V20/36, G06F18/23213, G06F18/241
Inventors: 王立春, 陆建霖, 王少帆, 孔德慧, 李敬华
Owner: BEIJING UNIV OF TECH