
A Road Scene Semantic Segmentation Method with Effective Fusion of Neural Network Features

A neural network and semantic segmentation technology, applied to neural learning methods, biological neural network models, scene recognition, etc. It addresses the problem that simple fusion of low-level and high-level features is not effective, thereby improving semantic segmentation accuracy, reducing information loss, and enhancing robustness.

Active Publication Date: 2022-04-05
ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Cites: 7 | Cited by: 0

AI Technical Summary

Problems solved by technology

However, due to differences in semantic level and spatial resolution, simple fusion of low-level and high-level features may not be effective.

Method used



Examples


Embodiment Construction

[0042] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0043] The present invention proposes a road scene semantic segmentation method that effectively fuses neural network features, comprising two phases: a training phase and a testing phase.

[0044] The specific steps of the training phase are as follows:

[0045] Step 1_1: Select Q original road scene images, together with the real (ground-truth) semantic segmentation image corresponding to each original road scene image, to form a training set. Record the q-th original road scene image in the training set as {I_q(i,j)}, together with its corresponding real semantic segmentation image. Then, the existing one-hot encoding technique is used to process the real semantic segmentation image corresponding to each original road scene image in the training set into 12 one-hot encoded images. ...
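The one-hot step above turns a single label map into 12 binary class maps, one per road-scene category. A minimal NumPy sketch of that encoding follows; the function name and the toy 12-class label values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def one_hot_encode(label_map, num_classes=12):
    """Turn an HxW integer label map into `num_classes` binary maps.

    Each output channel c is 1 where the pixel's class index equals c
    and 0 elsewhere, matching the 12 one-hot encoded images described
    in Step 1_1. (Assumes class indices lie in 0..num_classes-1.)
    """
    h, w = label_map.shape
    one_hot = np.zeros((num_classes, h, w), dtype=np.float32)
    for c in range(num_classes):
        one_hot[c] = (label_map == c).astype(np.float32)
    return one_hot

# Toy 2x3 ground-truth label map with classes in {0, 1, 11}
gt = np.array([[0, 1, 11],
               [1, 0, 0]])
maps = one_hot_encode(gt)
print(maps.shape)       # (12, 2, 3)
print(int(maps.sum()))  # 6 -- exactly one active channel per pixel
```

Because every pixel belongs to exactly one class, the channels sum to 1 at each pixel, which is what allows a per-pixel loss against a softmax prediction later in training.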



Abstract

The invention discloses a road scene semantic segmentation method that effectively fuses neural network features. In the training stage, a convolutional neural network is constructed, comprising an input layer, a hidden layer, and an output layer. The hidden layer includes a spatial feature extraction channel composed of three neural network blocks, a background feature extraction channel composed of five neural network blocks, and a feature fusion channel composed of fusion blocks. Each original road scene image in the training set is input into the convolutional neural network for training, yielding 12 semantic segmentation prediction maps for each original road scene image. The trained convolutional neural network model is obtained by computing the loss function value between the set of 12 semantic segmentation prediction maps for each image and the set of 12 one-hot encoded images produced from the corresponding real semantic segmentation image. In the test phase, the trained model is used for prediction. The advantages are high segmentation accuracy and strong robustness.
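The abstract describes training against a loss between the 12 prediction maps and the 12 one-hot ground-truth maps. The patent does not name the loss in this excerpt; the sketch below assumes a standard per-pixel cross-entropy over softmax probabilities, which is the common choice for this setup. Function names are illustrative.

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy_loss(logits, one_hot_gt, eps=1e-12):
    """Mean per-pixel cross-entropy between predicted class scores
    (12 x H x W logits) and one-hot ground truth (12 x H x W)."""
    probs = softmax(logits, axis=0)
    per_pixel = -(one_hot_gt * np.log(probs + eps)).sum(axis=0)
    return float(per_pixel.mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(12, 4, 4))      # random 12-channel prediction
gt = np.zeros((12, 4, 4))
gt[0] = 1.0                               # every pixel is class 0
loss = cross_entropy_loss(logits, gt)
print(loss > 0)  # True
```

A confident, correct prediction (a large logit on the true class at every pixel) drives this loss toward zero, which is the signal that trains the network.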

Description

Technical Field

[0001] The invention relates to a semantic segmentation method, in particular to a road scene semantic segmentation method that effectively fuses neural network features.

Background Technique

[0002] Semantic segmentation is a fundamental technique for many computer vision applications, such as scene understanding and autonomous driving. With the development of convolutional neural networks, especially fully convolutional networks (FCNs), many promising results have been achieved on benchmarks. A fully convolutional network has a typical encoder-decoder structure: semantic information is first embedded into feature maps by the encoder, and the decoder is responsible for generating the segmentation result. Typically, the encoder extracts image features through a pre-trained convolutional model, and the decoder contains multiple upsampling components to recover resolution. Although the encoder's most important feature maps may be...
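The background describes the decoder's role as recovering the spatial resolution lost in the encoder via upsampling components. A minimal sketch of one such component, nearest-neighbor upsampling in NumPy, is shown below; real FCN decoders typically use learned transposed convolutions or bilinear interpolation, so this is an assumption-laden illustration of the idea, not the patent's decoder.

```python
import numpy as np

def upsample_nearest(feature_map, factor=2):
    """Nearest-neighbor upsampling: each value in the HxW feature map is
    repeated factor x factor times, growing spatial resolution by
    `factor` along each axis."""
    return np.repeat(np.repeat(feature_map, factor, axis=0),
                     factor, axis=1)

fm = np.array([[1., 2.],
               [3., 4.]])   # a tiny 2x2 encoder feature map
up = upsample_nearest(fm)
print(up.shape)  # (4, 4)
```

Stacking several such stages (each followed by convolutions that refine the enlarged maps) is how a decoder returns from the encoder's coarse feature resolution to the full image resolution needed for per-pixel labels.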

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V20/58; G06V10/26; G06V10/82; G06N3/04; G06N3/08
CPC: G06N3/08; G06V20/56; G06N3/045
Inventors: 周武杰, 朱家懿, 叶绿, 雷景生, 王海江, 何成
Owner ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY