Indoor scene semantic segmentation method based on improved full convolutional neural network

A method in the field of indoor scene semantic segmentation based on an improved fully convolutional neural network, which addresses the problems of insufficient feature extraction and a large number of parameters.

Pending Publication Date: 2021-04-02
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0004] In terms of model application, with the emergence and development of fully convolutional neural networks, their application has achieved superior performance and segmentation results in


Image

  • Indoor scene semantic segmentation method based on improved full convolutional neural network

Examples


Embodiment Construction

[0053] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0054] The indoor scene semantic segmentation method based on an improved fully convolutional neural network proposed by the present invention has an overall implementation block diagram as shown in Figure 1. It includes two processes: a training phase and a testing phase.

[0055] The specific steps of the training phase are as follows:

[0056] Step 1_1: Select Q pairs of original indoor scene RGB color images and Depth images, together with the real semantic segmentation image corresponding to each pair of original indoor scene images, to form a training set. Denote the q-th pair of original indoor scene images in the training set as {RGB_q(i,j), Depth_q(i,j)}, and denote the real semantic segmentation image corresponding to {RGB_q(i,j), Depth_q(i,j)} as . Then use the existing one-hot enc...
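The one-hot encoding step mentioned above converts each integer class label in the ground-truth segmentation map into a binary vector. A minimal sketch in pure Python (the function name and the toy label map are illustrative, not from the patent):

```python
# Hypothetical sketch of one-hot encoding a ground-truth segmentation
# map, assuming class labels are integers in [0, num_classes).

def one_hot_encode(label_map, num_classes):
    """Convert an HxW map of integer class labels into an
    HxWxC one-hot encoded image (nested lists, C = num_classes)."""
    return [
        [[1 if c == label else 0 for c in range(num_classes)]
         for label in row]
        for row in label_map
    ]

# Example: a 2x2 ground-truth segmentation with 3 classes.
labels = [[0, 2],
          [1, 1]]
encoded = one_hot_encode(labels, 3)
print(encoded[0][1])  # pixel of class 2 -> [0, 0, 1]
```

In practice a deep-learning framework would do this on tensors, but the per-pixel logic is the same.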



Abstract

The invention discloses an indoor scene semantic segmentation method based on an improved fully convolutional neural network. The method comprises the following steps: first, constructing a convolutional neural network whose hidden layer comprises five neural network blocks, five feature re-extraction convolutional layer blocks, five partitioning attention convolutional blocks, twelve fusion layers and four up-sampling layers; inputting the original indoor scene images into the convolutional neural network for training to obtain the corresponding semantic segmentation prediction maps; calculating a loss function value between the set of semantic segmentation prediction maps corresponding to the original indoor scene images and the set of one-hot coded images obtained from the corresponding real semantic segmentation images, to obtain the optimal weight vector and bias term of the convolutional neural network classification training model; and inputting an indoor scene image to be semantically segmented into the trained convolutional neural network classification training model to obtain the predicted semantic segmentation image. The method has the advantage of improving both the efficiency and the accuracy of semantic segmentation of indoor scene images.
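The abstract describes computing a loss between the predicted maps and the one-hot coded ground truth. The patent does not name its exact loss function here; a per-pixel cross-entropy is the standard choice for this setup, sketched below in pure Python (function names and the toy maps are illustrative assumptions):

```python
import math

# Hedged sketch: mean per-pixel cross-entropy between a predicted
# class-probability map and a one-hot ground-truth map. The patent
# does not specify its loss; cross-entropy is the conventional one.

def pixel_cross_entropy(pred_probs, one_hot, eps=1e-12):
    """Mean cross-entropy over all pixels.
    pred_probs, one_hot: HxWxC nested lists; eps avoids log(0)."""
    total, count = 0.0, 0
    for pred_row, gt_row in zip(pred_probs, one_hot):
        for p, t in zip(pred_row, gt_row):
            total += -sum(ti * math.log(pi + eps) for pi, ti in zip(p, t))
            count += 1
    return total / count

# 1x2 image, 3 classes: ground truth is class 0 then class 1.
probs = [[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]]
truth = [[[1, 0, 0], [0, 1, 0]]]
loss = pixel_cross_entropy(probs, truth)
```

Training then adjusts the network weights to minimize this value, yielding the optimal weight vector and bias term mentioned in the abstract.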

Description

technical field

[0001] The invention relates to a deep learning semantic segmentation method, and in particular to an indoor scene semantic segmentation method based on an improved fully convolutional neural network.

Background technique

[0002] Semantic image segmentation is one of the most challenging tasks in computer vision and plays a key role in applications such as autonomous driving, medical image analysis, virtual reality, and human-computer interaction. The core purpose of semantic segmentation is to assign a category label to each pixel in an image, determining which category that pixel belongs to. Since semantic segmentation datasets generally involve indoor or outdoor scenes containing many objects to segment, it is essentially a multi-classification problem.

[0003] From the perspective of supervised learning, image semantic segmentation can be divided into three types: fully supervised, semi-supervised and unsupervised. However, from the pe...
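The core idea stated above, assigning each pixel the category with the highest predicted score, can be sketched in a few lines of pure Python (the function name and the toy score map are hypothetical):

```python
# Illustrative sketch: semantic segmentation as per-pixel
# multi-class classification via argmax over class scores.

def predict_labels(score_map):
    """HxWxC score map -> HxW label map via per-pixel argmax."""
    return [[max(range(len(scores)), key=lambda c: scores[c])
             for scores in row]
            for row in score_map]

# 1x2 image, 2 classes: first pixel favors class 1, second class 0.
scores = [[[0.1, 0.9], [0.6, 0.4]]]
print(predict_labels(scores))  # [[1, 0]]
```

A real network produces the score map; this final argmax step is what turns class scores into the predicted semantic segmentation image.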


Application Information

IPC (8): G06T7/10; G06K9/62; G06N3/04; G06N3/08; G06T5/30
CPC: G06T7/10; G06T5/30; G06N3/08; G06T2207/10024; G06T2207/20081; G06T2207/20084; G06T2207/20221; G06N3/048; G06N3/045; G06F18/25
Inventor: 周武杰, 岳雨纯, 雷景生, 强芳芳, 周扬, 邱薇薇, 何成, 王海江, 马骁, 郭翔
Owner ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY