Road scene semantic segmentation method based on convolutional neural network

A convolutional neural network and semantic segmentation technology, applied in the field of road scene semantic segmentation, which can solve problems such as insufficient sensitivity to image details, inaccurate segmentation results, and cumbersome training.

Active Publication Date: 2019-04-16
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY

AI Technical Summary

Problems solved by technology

However, it has several shortcomings. First, training is cumbersome: the network must be trained in three stages to obtain FCN-8s. Second, it is not sensitive enough to image details. This is because during decoding, that is, the process of restoring the original image size, the label map (label image) fed to the upsampling layers is too sparse, and the upsampling is a simple deconvolution (transposed convolution), so the resulting segmentation is still not fine.
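To illustrate what the "simple deconvolution" criticized above does, the following is a minimal pure-Python sketch of a stride-2 transposed convolution on a 2-D feature map. It is an illustration of the generic operation, not the patent's implementation; the kernel values here are arbitrary.

```python
def deconv2d(x, kernel, stride=2):
    """Minimal stride-2 transposed convolution ("deconvolution") on a 2-D map.

    Each input value is multiplied by the kernel and accumulated into the
    output at stride-spaced positions, enlarging the map -- the simple
    upsampling that leaves fine detail unrecovered.
    """
    h, w = len(x), len(x[0])
    kh, kw = len(kernel), len(kernel[0])
    oh = (h - 1) * stride + kh
    ow = (w - 1) * stride + kw
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(h):
        for j in range(w):
            for a in range(kh):
                for b in range(kw):
                    out[i * stride + a][j * stride + b] += x[i][j] * kernel[a][b]
    return out

# A 2x2 feature map upsampled to 4x4 with a 2x2 kernel of ones:
up = deconv2d([[1.0, 2.0], [3.0, 4.0]], [[1.0, 1.0], [1.0, 1.0]])
```

With a kernel of ones this degenerates to block replication of each input value, which makes the loss of detail easy to see.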



Embodiment Construction

[0060] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0061] The overall implementation block diagram of the road scene semantic segmentation method based on a convolutional neural network proposed by the present invention is shown in Figure 1; the method includes two processes, a training phase and a testing phase.

[0062] The specific steps of the training phase are as follows:

[0063] Step 1_1: Select Q original road scene images and the real (ground-truth) semantic segmentation image corresponding to each original road scene image to form a training set, and record the qth original road scene image in the training set as {I_q(i,j)}, with its corresponding real semantic segmentation image denoted accordingly. Then, the existing one-hot encoding technique (one-hot) is used to process the real semantic segmentation images corresponding to each original road scene...
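The one-hot encoding step above converts an integer label image into one binary map per semantic class. A minimal pure-Python sketch of that standard operation (the class count of 3 below is only for illustration):

```python
def one_hot(label_map, num_classes):
    """Convert an H x W integer label map into num_classes binary maps,
    one per semantic class: map c is 1 exactly where the label equals c."""
    h, w = len(label_map), len(label_map[0])
    return [
        [[1 if label_map[i][j] == c else 0 for j in range(w)] for i in range(h)]
        for c in range(num_classes)
    ]

# A 2x2 label image with 3 classes:
maps = one_hot([[0, 1], [2, 1]], 3)
```

Each ground-truth segmentation image thus becomes a stack of per-class masks that the network's per-class prediction maps can be compared against.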



Abstract

The invention discloses a road scene semantic segmentation method based on a convolutional neural network. The method comprises the steps of: first building the convolutional neural network, which comprises an input layer, a hidden layer, and an output layer, the hidden layer being composed of 13 neural network blocks, 7 upsampling layers, and 8 cascade layers; inputting each original road scene image in the training set into the convolutional neural network for training, to obtain 12 semantic segmentation prediction images corresponding to each original road scene image; calculating a loss function value between the set of 12 semantic segmentation prediction images corresponding to each original road scene image and the set of 12 one-hot encoded images obtained from the corresponding real semantic segmentation image; obtaining an optimal weight vector and an optimal bias term of the convolutional neural network classification training model; and inputting the road scene image to be semantically segmented into the convolutional neural network classification training model for prediction, to obtain the corresponding predicted semantic segmentation image. The method has the advantage of high semantic segmentation precision.

Description

Technical field

[0001] The invention relates to a road scene semantic segmentation technology, in particular to a road scene semantic segmentation method based on a convolutional neural network.

Background technique

[0002] In recent years, advances in machines capable of performing computationally intensive tasks have allowed researchers to dig deeper into neural networks. Convolutional neural networks have achieved recent success in image classification, localization, and scene understanding. At present, due to the proliferation of tasks such as augmented reality and self-driving vehicles, many researchers have turned their attention to scene understanding, one of whose main steps is semantic segmentation: classifying each pixel in a given image. Semantic segmentation is of great significance in mobile and robotics related applications.

[0003] Of course, object detection methods can help draw bounding boxes for certain entities, but human scene understanding...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/34; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/38; G06V20/20; G06V20/56; G06V10/267; G06N3/045; G06F18/24
Inventors: 周武杰, 吕思嘉, 袁建中, 向坚, 王海江, 何成
Owner ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY