Image semantic annotation method based on super pixel segmentation

A superpixel segmentation and semantic annotation technology, applied in the field of image semantic analysis based on superpixel segmentation and convolutional neural networks. It addresses the problem that existing image annotation methods are not universally applicable, and achieves improved accuracy and robustness of annotation, improved computational efficiency, and simplified complexity.

Active Publication Date: 2016-10-12
ZHEJIANG UNIV
Cites: 2 · Cited by: 32

AI Technical Summary

Problems solved by technology

These methods reflect completely different research ideas, but none of them solves the image annotation problem universally.

Method used




Embodiment Construction

[0020] As shown in the figure, an image semantic annotation system based on superpixel segmentation is divided into two parts. The first part is superpixel-block feature extraction: multi-level superpixel blocks are converted into feature image blocks that can be fed into a convolutional neural network for training; for each superpixel block, the feature vector is expanded with the geometric features of the superpixel, and a support vector machine is used to weight the features of the block. In the second part, the multi-level superpixel features are integrated down to the pixel level, a pixel-level conditional random field model is established, and inference is carried out by maximum a posteriori marginal estimation; solving the model yields the image's annotation result. The technical problem to be solved by the present invention is to provide an image semantic annot...
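The first step described above, turning each superpixel into a fixed-size image block that a CNN can consume, can be sketched as follows. This is not the patent's implementation: the `grid_superpixels` helper is a toy stand-in for a real superpixel algorithm such as SLIC, and the masking-plus-resize strategy is one plausible reading of "converting superpixel blocks into feature image blocks"; all function names are hypothetical.

```python
import numpy as np

def grid_superpixels(h, w, step):
    """Toy stand-in for a real superpixel algorithm (e.g. SLIC):
    label pixels by a regular grid of step x step cells."""
    rows = np.arange(h) // step
    cols = np.arange(w) // step
    n_cols = (w + step - 1) // step
    return rows[:, None] * n_cols + cols[None, :]

def nearest_resize(patch, out_size):
    """Nearest-neighbour resize to a square out_size x out_size patch."""
    h, w = patch.shape[:2]
    ri = (np.arange(out_size) * h // out_size).clip(0, h - 1)
    ci = (np.arange(out_size) * w // out_size).clip(0, w - 1)
    return patch[ri[:, None], ci[None, :]]

def extract_superpixel_patches(image, labels, out_size=32):
    """For each superpixel: crop its bounding box, zero out pixels that
    belong to other superpixels, and resize to a fixed CNN input size."""
    patches = {}
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        crop = image[y0:y1, x0:x1].copy()
        crop[labels[y0:y1, x0:x1] != lab] = 0  # mask out neighbours
        patches[int(lab)] = nearest_resize(crop, out_size)
    return patches
```

The fixed-size patches returned here would then be stacked into a batch for CNN training; the geometric-feature expansion and SVM weighting the paragraph mentions would operate on the CNN's output vectors, not on these raw patches.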


PUM

None recorded.

Abstract

The invention provides an image semantic annotation method based on superpixel segmentation. The method comprises the following steps: feature blocks extracted from the image's superpixel segmentation are input into a convolutional neural network; the feature vectors trained by the network are expanded and weighted; and finally, a conditional random field model is constructed for semantic-class annotation prediction. By taking the superpixel block as the unit of analysis, the method simplifies the extraction of feature blocks and improves the computational efficiency of semantic annotation; in addition, multi-level superpixel blocks are used for semantic analysis and their annotation results are integrated, which improves the accuracy and robustness of the annotation.
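The abstract's final fusion step, integrating annotation results from multiple superpixel levels down to individual pixels, can be sketched as simple score accumulation followed by a per-pixel argmax. This is a simplified stand-in for the patent's conditional random field with maximum a posteriori marginal inference; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def fuse_multilevel_predictions(label_maps, scores_per_level):
    """Fuse per-superpixel class scores from several segmentation levels
    into one pixel-level label map.

    label_maps:       list of (H, W) int arrays, one per superpixel level.
    scores_per_level: list of (n_superpixels, n_classes) arrays; row i
                      holds the class scores of superpixel i at that level.
    Returns an (H, W) array of per-pixel class labels.
    """
    h, w = label_maps[0].shape
    n_classes = scores_per_level[0].shape[1]
    pixel_scores = np.zeros((h, w, n_classes))
    for labels, scores in zip(label_maps, scores_per_level):
        # every pixel inherits the class-score vector of its superpixel
        pixel_scores += scores[labels]
    return pixel_scores.argmax(axis=-1)
```

Because each level segments the image at a different granularity, a pixel that the levels disagree on ends up with a blended score vector, which is what gives the multi-level scheme its robustness; the patent's CRF additionally enforces spatial smoothness between neighbouring pixels, which this sketch omits.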

Description

technical field

[0001] The present invention relates to image semantic annotation methods, in particular to the technical field of image semantic analysis based on superpixel segmentation and convolutional neural networks.

Background technique

[0002] At present, the application of robots has expanded from traditional industrial manufacturing to military, scientific-exploration and even medical-service settings. In these new application areas, robots often work in unstructured outdoor environments. Compared with an indoor environment carrying relatively uniform information, an outdoor scene is more complex, changeable and layered, involves a wide variety of semantic information, and is easily affected by factors such as lighting and field of view. In addition, a robot has no predefined step-by-step operating procedure when it works and only limited prior knowledge, so perception and understanding of the outdoor environment become a necessary prerequisite for autonomous control such as environmental m...

Claims


Application Information

Patent Timeline
No application timeline recorded.
IPC(8): G06K9/62
CPC: G06F18/214; G06F18/24
Inventors: 刘勇 (Liu Yong), 刘晓峰 (Liu Xiaofeng)
Owner: ZHEJIANG UNIV