
De-convolutional neural network-based scene semantic segmentation method

A neural-network and semantic-segmentation technology, applied to instruments, character and pattern recognition, computer components, etc. It addresses problems such as object classification errors and rough edges of segmented objects, overcoming inherent defects of prior methods and improving scene segmentation accuracy.

Active Publication Date: 2017-08-18
INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

These algorithms generally use state-of-the-art fully convolutional neural networks for scene segmentation, but each neural unit of a fully convolutional network has a large receptive field, which tends to make the edges of segmented objects very rough.
Second, RGB and depth information are fused with the simplest superposition strategy, ignoring the fact that the two modalities play very different roles in distinguishing different objects in different scenes; this leads to many object classification errors in the resulting semantic segmentation.
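The naive superposition criticised above adds the RGB and depth features with fixed, equal weight. One common alternative is a gated fusion, where a per-pixel weight decides each modality's contribution. The NumPy sketch below contrasts the two; all names, shapes, and the gate form are illustrative assumptions, not the patent's actual architecture:

```python
import numpy as np

def naive_fusion(f_rgb, f_depth):
    # simple superposition: both modalities always contribute equally,
    # regardless of object or scene
    return f_rgb + f_depth

def gated_fusion(f_rgb, f_depth, w, b):
    # a per-pixel gate g in (0, 1) decides how much each modality
    # contributes, so their roles can differ per object and per scene
    feats = np.concatenate([f_rgb, f_depth], axis=0)                  # (2C, H, W)
    g = 1.0 / (1.0 + np.exp(-(np.tensordot(w, feats, axes=1) + b)))   # (H, W)
    return g * f_rgb + (1.0 - g) * f_depth

C, H, W = 8, 4, 4
f_rgb = np.random.rand(C, H, W)
f_depth = np.random.rand(C, H, W)
w = np.zeros(2 * C)   # zero gate weights -> g = 0.5 everywhere
fused = gated_fusion(f_rgb, f_depth, w, b=0.0)
```

With zero gate weights the gate sits at 0.5 and the fusion reduces to the simple average; in practice the weights would be learned so the gate shifts toward whichever modality is more informative at each location.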

Method used



Embodiment Construction

[0024] Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only intended to explain the technical principles of the present invention, not to limit its scope of protection.

[0025] As shown in Figure 1, the scene semantic segmentation method based on a deconvolutional neural network according to an embodiment of the present invention comprises the following steps:

[0026] Step S1: use a fully convolutional neural network to extract a low-resolution dense feature representation from the scene image;
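Step S1's "low-resolution dense feature" arises because strided convolution (or pooling) shrinks the spatial grid while still producing a value at every remaining location. A minimal single-channel illustration in NumPy, purely for intuition and not the patent's network:

```python
import numpy as np

def conv2d(x, kernel, stride=2):
    # "valid" strided convolution; the stride halves the resolution,
    # which is why stacked FCN layers yield a low-resolution feature map
    H, W = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((H - kh) // stride + 1, (W - kw) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * kernel)
    return out

img = np.random.rand(32, 32)
k = np.ones((3, 3)) / 9.0           # a simple averaging kernel
feat = conv2d(conv2d(img, k), k)    # two stride-2 layers: 32 -> 15 -> 7
```

After two stride-2 layers the 32x32 input has collapsed to a 7x7 feature map; step S2 exists precisely to upsample such a map back to full resolution.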

[0027] Step S2: use a locally sensitive deconvolutional neural network together with the local affinity matrix of the image to upsample and refine the dense feature representation obtained in step S1, obtaining a score map of the image and thereby achieving fine-grained scene semantic segmentation.
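Step S2 can be pictured as plain upsampling followed by an edge-aware correction: a local affinity matrix, built from colour similarity between neighbouring pixels, keeps class scores from diffusing across strong edges. The sketch below is a hand-rolled NumPy approximation of that idea; the affinity form, neighbourhood, and iteration count are all assumptions, not the patent's learned deconvolution:

```python
import numpy as np

def upsample_nearest(score, factor):
    # coarse (C, h, w) score map -> (C, h*factor, w*factor)
    return score.repeat(factor, axis=1).repeat(factor, axis=2)

def local_affinity(img, sigma=0.05):
    # affinity with right and bottom neighbours from colour distance:
    # ~1 inside uniform regions, ~0 across strong edges
    aff_h = np.exp(-np.sum((img[:, 1:] - img[:, :-1]) ** 2, axis=-1) / sigma)
    aff_v = np.exp(-np.sum((img[1:, :] - img[:-1, :]) ** 2, axis=-1) / sigma)
    return aff_h, aff_v

def refine(score, aff_h, aff_v, iters=5):
    # average each pixel's scores with its neighbours, weighted by
    # affinity, so the smoothing stops at colour edges
    s = score.astype(float).copy()
    for _ in range(iters):
        acc = s.copy()
        wgt = np.ones_like(s)
        acc[:, :, 1:] += aff_h * s[:, :, :-1]; wgt[:, :, 1:] += aff_h
        acc[:, :, :-1] += aff_h * s[:, :, 1:]; wgt[:, :, :-1] += aff_h
        acc[:, 1:, :] += aff_v * s[:, :-1, :]; wgt[:, 1:, :] += aff_v
        acc[:, :-1, :] += aff_v * s[:, 1:, :]; wgt[:, :-1, :] += aff_v
        s = acc / wgt
    return s

# toy scene: left half dark, right half bright, with a coarse 2-class score map
img = np.zeros((8, 8, 3)); img[:, 4:] = 1.0
coarse = np.zeros((2, 4, 4)); coarse[0, :, :2] = 1.0; coarse[1, :, 2:] = 1.0
aff_h, aff_v = local_affinity(img)
fine = refine(upsample_nearest(coarse, 2), aff_h, aff_v)
labels = fine.argmax(axis=0)
```

Because the affinity across the colour boundary is effectively zero, the refined labels stay crisp at the edge instead of being blurred by a large receptive field, which is the defect the method targets.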

[0028] Scene semantic segmentation ...



Abstract

The invention discloses a de-convolutional neural network-based scene semantic segmentation method. The method comprises the following steps: S1, extracting a dense feature representation from a scene image using a fully convolutional neural network; and S2, performing upsampling learning and object-edge refinement on the dense feature representation obtained in step S1, using a locally sensitive de-convolutional neural network together with a local affinity matrix of the image, to obtain a score map of the image and thereby achieve refined scene semantic segmentation. Through the locally sensitive de-convolutional network, local low-level information strengthens the fully convolutional network's sensitivity to local edges, yielding higher-precision scene segmentation.

Description

Technical field

[0001] The invention relates to the fields of pattern recognition, machine learning, and computer vision, and in particular to a scene semantic segmentation method based on a deconvolutional neural network.

Background technique

[0002] With the rapid improvement of computing power, computer vision, artificial intelligence, machine perception, and related fields are developing rapidly. Scene semantic segmentation, one of the fundamental problems in computer vision, has advanced greatly as well. Scene semantic segmentation uses computers to intelligently analyse images and then determine the object category to which each pixel in the image belongs, such as floor, wall, person, or chair. Traditional scene semantic segmentation algorithms generally rely only on RGB (red, green, and blue) images for segmentation, so they are easily disturbed by lighting changes, object colour changes, and cluttered backgrounds. They are not robust in actual use and the accu...
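The per-pixel decision described above is, concretely, an argmax over a class-score map. A hypothetical 4-class NumPy example (class names taken from the text, scores randomised purely for illustration):

```python
import numpy as np

classes = ["floor", "wall", "person", "chair"]   # example categories from the text
rng = np.random.default_rng(0)
scores = rng.random((len(classes), 6, 6))  # (C, H, W): one score per class per pixel
labels = scores.argmax(axis=0)             # (H, W): object category index of each pixel
```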

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06K9/00, G06K9/46
CPC: G06V20/35, G06V10/462, G06V10/44
Inventor: 黄凯奇, 赵鑫, 程衍华
Owner: INST OF AUTOMATION CHINESE ACAD OF SCI