Road scene segmentation method based on residual network and expanded convolution

A road scene segmentation technology, applied in character and pattern recognition, instruments, computer parts, etc., which addresses problems such as segmentation accuracy being degraded, lost information that cannot be recovered, and poorly controlled information loss.

Inactive Publication Date: 2019-04-16
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Cites: 0 · Cited by: 21

AI Technical Summary

Problems solved by technology

However, one shortcoming of FCN is that, because of its pooling layers, the spatial size (length and width) of the response tensor shrinks progressively. Since FCN is designed to produce an output the same size as its input, it upsamples the reduced feature maps; but upsampling cannot recover the information lost during pooling, which affects segmentation accuracy.
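The trade-off described above can be sketched numerically with the standard convolution output-size formula. This is an illustrative example, not from the patent: the input size (224) and layer counts are assumptions chosen for demonstration.

```python
# Sketch: why pooling shrinks the response tensor while dilated (atrous)
# convolution can keep it full-size. Sizes here are illustrative assumptions.

def conv_out_size(n, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a conv/pool layer (standard formula)."""
    effective_kernel = dilation * (kernel - 1) + 1
    return (n + 2 * padding - effective_kernel) // stride + 1

n = 224  # assumed input width/height

# FCN-style path: each 2x2 max-pool with stride 2 halves the feature map.
pooled = n
for _ in range(5):
    pooled = conv_out_size(pooled, kernel=2, stride=2)
print(pooled)  # 224 -> 7: a 32x reduction that upsampling must then undo

# Dilated path: a 3x3 conv with dilation d and padding d keeps the map
# at full resolution while still enlarging the receptive field.
dilated = conv_out_size(n, kernel=3, padding=2, dilation=2)
print(dilated)  # stays at 224
```

The dilated convolution enlarges the receptive field (here a 3x3 kernel with dilation 2 covers a 5x5 region) without any loss of spatial resolution, which is the motivation for replacing pooling with dilation in the method's network.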



Examples


Embodiment Construction

[0041] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0042] The overall implementation block diagram of the road scene segmentation method based on residual network and dilated convolution proposed by the present invention is shown in Figure 1; the method includes two processes: a training phase and a testing phase.

[0043] The specific steps of the described training phase process are:

[0044] Step 1_1: Select Q original road scene images and the real (ground-truth) semantic segmentation image corresponding to each original road scene image to form a training set; denote the qth original road scene image in the training set as {I_q(i,j)}, and denote accordingly the real semantic segmentation image in the training set corresponding to {I_q(i,j)}. Then, the existing one-hot encoding technique (one-hot) is used to process the real semantic segmentation images corresponding to each original road scene ...
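The one-hot encoding step above turns each ground-truth label map into one binary map per class. A minimal NumPy sketch, with a made-up 2x2 toy label map (the class count of 12 follows the abstract; the helper name `one_hot_encode` is illustrative, not from the patent):

```python
import numpy as np

# Hedged sketch of the one-hot encoding step: a (H, W) integer label map
# becomes (num_classes, H, W) binary maps, one per semantic class.

def one_hot_encode(label_map, num_classes):
    """Turn an integer label map (H, W) into (num_classes, H, W) binary maps."""
    classes = np.arange(num_classes)[:, None, None]          # (C, 1, 1)
    return (classes == label_map[None, :, :]).astype(np.uint8)

labels = np.array([[0, 2],
                   [1, 2]])                  # toy 2x2 ground-truth segmentation
encoded = one_hot_encode(labels, num_classes=12)
print(encoded.shape)                          # (12, 2, 2)
print(encoded[2])                             # binary mask for class 2
```

Each pixel contributes a 1 in exactly one of the 12 maps, so the encoded set can be compared directly against per-class prediction maps during training.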



Abstract

The invention discloses a road scene segmentation method based on a residual network and dilated convolution. In the training stage, a convolutional neural network is constructed whose hidden layer is composed of ten residual blocks arranged in sequence; each original road scene image in the training set is input into the convolutional neural network for training, yielding 12 semantic segmentation prediction maps for each original road scene image; a loss function value is then calculated between the set formed by the 12 semantic segmentation prediction maps of each original road scene image and the set formed by the 12 one-hot encoded images obtained from the corresponding real semantic segmentation image, so as to obtain the optimal weight vector of the convolutional neural network classification training model. In the test stage, prediction is carried out using the optimal weight vector of the trained model, and a predicted semantic segmentation image corresponding to the road scene image to be semantically segmented is obtained. The method has the advantages of low computational complexity, high segmentation efficiency, high segmentation precision and good robustness.
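The loss computation described in the abstract compares prediction maps against one-hot targets. The patent does not specify the loss function here, so the following is an illustrative sketch assuming a per-pixel cross-entropy averaged over the 12 prediction maps; all shapes and random values are demonstration assumptions.

```python
import numpy as np

# Illustrative sketch (not the patent's exact loss): per-pixel cross-entropy
# between softmax prediction maps and one-hot targets, averaged over the
# 12 prediction maps mentioned in the abstract.

def softmax(logits, axis=0):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(pred_probs, one_hot, eps=1e-8):
    """Mean per-pixel cross-entropy; both arguments have shape (C, H, W)."""
    return float(-(one_hot * np.log(pred_probs + eps)).sum(axis=0).mean())

rng = np.random.default_rng(0)
C, H, W = 12, 4, 4                               # 12 classes, toy 4x4 image
targets = np.eye(C)[rng.integers(0, C, (H, W))].transpose(2, 0, 1)

# Average the loss over 12 prediction maps, as in the abstract.
losses = [cross_entropy(softmax(rng.normal(size=(C, H, W))), targets)
          for _ in range(12)]
print(sum(losses) / len(losses))                  # scalar training loss
```

Minimizing such a scalar over the training set is what yields the "optimal weight vector" the abstract refers to; the actual network and optimizer are defined in the patent's embodiment.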

Description

technical field

[0001] The present invention relates to a deep learning semantic segmentation technology, and in particular to a road scene segmentation method based on residual network and dilated convolution.

Background technique

[0002] Deep learning is a branch of artificial neural networks, and the artificial neural network with a deep network structure is the earliest network model of deep learning. Initially, deep learning was applied mainly in the fields of image and speech. Since 2006, interest in deep learning has kept growing in academia. Deep learning and neural networks have been widely used in semantic segmentation, computer vision, speech recognition, and tracking; their efficiency also makes them suitable for real-time applications, and they hold huge potential in many areas.

[0003] Convolutional neural networks have achieved success in image classification, localization, and scene understanding. With the proliferation of tasks such as augmented reality and ...


Application Information

IPC(8): G06K9/00; G06K9/62
CPC: G06V20/38; G06V20/20; G06V20/56; G06F18/24
Inventors: 周武杰, 吕思嘉, 袁建中, 向坚, 王海江, 何成
Owner: ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY