
Road scene semantic segmentation method based on multi-model fusion

A semantic segmentation method using multi-model fusion, applied in the field of computer vision. It addresses problems such as poor connectivity of segmentation results and insufficient accuracy of road-category segmentation, with the effect of improving recognition accuracy and robustness, increasing computing speed, and improving overall accuracy and edge precision.

Pending Publication Date: 2022-07-01
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Benefits of technology

This patented technology fuses the outputs of multiple segmentation models into a single, more capable system. It relies on deep learning techniques, including a visual attention mechanism, which allow it to distinguish categories accurately even in complex road scenes where classes look similar. Taken together, these improvements enhance both the accuracy and the robustness of scene understanding during driving.

Problems solved by technology

This patent discusses how traditional semantic segmentation techniques built on fully connected (FC) network layers are limited in their ability to recognize large regions accurately. In addition, existing approaches analyze each region of a frame separately before passing its content to the next stage of the system, which limits efficiency.



Embodiment Construction

[0077] In order to facilitate the understanding of those skilled in the art, the present invention will be further described below with reference to the embodiments and the accompanying drawings, and the contents mentioned in the embodiments are not intended to limit the present invention.

[0078] Referring to Figure 1, a method for semantic segmentation of road scenes based on multi-model fusion according to the present invention comprises the following steps:

[0079] 1) Build a multi-classification model and a binary classification model, specifically including:

[0080] 11) Build the multi-classification model based on an improved high-resolution network (HRNet) with visual attention introduced; the multi-classification model outputs pixel-level label images, predicting the category of each pixel;

[0081] 12) Build the binary classification model based on the encoder-decoder structure of DeepLabV3+; the binary classification model outputs the prediction result for the road category.
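The attention-weighted feature fusion that step 11) attributes to the improved HRNet can be sketched as follows. This is a hypothetical illustration, not the patent's exact architecture: the mean activation of each branch stands in for a learned attention score, and a softmax turns the scores into fusion weights so that more effective feature maps receive larger weights.

```python
import numpy as np

def attention_fuse(branches):
    """Fuse same-resolution feature maps from several network branches.

    A simple channel statistic (global average activation) stands in for a
    learned attention score; softmax converts the scores into fusion weights,
    so stronger feature maps dominate the fused result.
    """
    # Score each branch by its mean activation (stand-in for learned attention)
    scores = np.array([b.mean() for b in branches])
    exp = np.exp(scores - scores.max())
    weights = exp / exp.sum()               # softmax fusion weights, sum to 1
    fused = sum(w * b for w, b in zip(weights, branches))
    return fused, weights

strong = np.full((4, 4), 2.0)   # an "effective" feature map
weak = np.full((4, 4), 0.1)     # a less useful feature map
fused, w = attention_fuse([strong, weak])
print(w)  # the stronger map receives the larger fusion weight
```

In the actual network the attention scores would be produced by trained layers rather than a fixed statistic, but the weighting mechanism is the same: effective maps are amplified and weak maps are suppressed before fusion.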

[0082] Wherein, the step 11) specifically includes:

[...


Abstract

The invention discloses a road scene semantic segmentation method based on multi-model fusion. The method comprises the following steps: building a multi-classification model and a binary classification model; training each model end to end to obtain the optimal weights that minimize its loss function; using the optimal weights to perform multi-class and binary prediction on a road scene image, forming preliminary segmentation result images; applying image post-processing to the preliminary segmentation result image produced by the binary prediction; and fusing the preliminary segmentation result image from the multi-class prediction with the post-processed segmentation result image. In the multi-classification model, visual attention is added to the feature fusion part of the original HRNet, so that effective feature maps obtain larger fusion weights while invalid or less useful feature maps obtain smaller fusion weights, improving the pixel representation capability of the multi-classification model and yielding a better segmentation result.
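The final fusion step in the abstract, combining the multi-class prediction with the post-processed binary road mask, can be sketched as below. The class id, array shapes, and the override rule are assumptions for illustration; the patent does not specify the exact fusion rule, so this shows one plausible reading in which the binary model's road mask overrides the road class in the multi-class label image.

```python
import numpy as np

ROAD = 0  # assumed class id for "road" in the multi-class label image

def fuse_predictions(multiclass_labels, road_mask):
    """Merge a pixel-level multi-class label map with a post-processed
    binary road mask: wherever the binary model marks road, the fused
    result is labeled ROAD; elsewhere the multi-class label is kept."""
    fused = multiclass_labels.copy()
    fused[road_mask == 1] = ROAD
    return fused

labels = np.array([[1, 0],
                   [2, 2]])     # multi-class prediction (toy 2x2 image)
mask = np.array([[1, 0],
                 [1, 0]])       # post-processed binary road mask
print(fuse_predictions(labels, mask))
# [[0 0]
#  [0 2]]
```

Because the binary model is specialized for the road category, letting its mask take precedence over the multi-class output is one way the method could sharpen road edges and connectivity in the fused result.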

