
Semantic segmentation method in autonomous driving scenarios based on BiSeNet

A semantic segmentation technology for autonomous driving, applied in scene recognition and related fields; it addresses the time-consuming nature of data annotation and achieves a small model size, good convergence, and high accuracy.

Active Publication Date: 2022-08-09
FUZHOU UNIV

AI Technical Summary

Problems solved by technology

The main disadvantage of deep learning is that it requires a large amount of labeled data, which is time-consuming to produce; nevertheless, its flaws do not outweigh its merits.




Embodiment Construction

[0046] The present invention will be further described below with reference to the accompanying drawings and embodiments.

[0047] Referring to Figure 1, the present invention provides a BiSeNet-based semantic segmentation method for autonomous driving scenes, comprising the following steps:

[0048] Step S1: collect urban street image data and preprocess it;

[0049] Step S2: label the preprocessed image data to obtain labeled image data;

[0050] Step S3: perform data augmentation on the labeled image data, and use the augmented image data as the training set;

[0051] Step S4: build a BiSeNet neural network model and train it on the training set;

[0052] Step S5: preprocess the video information collected by the camera, and perform semantic segmentation of the urban streets in the camera feed using the trained BiSeNet neural network model.
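One detail implied by steps S3-S5 is that, for semantic segmentation, any geometric augmentation applied to an image must be applied identically to its label mask, or pixels and labels fall out of alignment. A minimal pure-Python sketch of this idea (nested lists stand in for real image tensors; the function names `hflip` and `augment_pair` are illustrative, not from the patent):

```python
def hflip(grid):
    """Horizontally flip a 2-D grid (an image channel or a label mask)."""
    return [row[::-1] for row in grid]

def augment_pair(image, mask):
    """Return the original (image, mask) pair plus a flipped copy.

    The flip is applied jointly to the image and the mask, which is the
    key requirement when augmenting segmentation training data.
    """
    return [(image, mask), (hflip(image), hflip(mask))]

# Toy 2x3 "image" and its per-pixel class labels.
image = [[10, 20, 30],
         [40, 50, 60]]
mask  = [[ 0,  1,  1],
         [ 0,  0,  1]]

train_set = augment_pair(image, mask)
flipped_img, flipped_mask = train_set[1]
print(flipped_img)   # [[30, 20, 10], [60, 50, 40]]
print(flipped_mask)  # [[1, 1, 0], [1, 0, 0]]
```

In a real pipeline the same joint-transform principle extends to crops, scales, and rotations; color jitter, by contrast, touches only the image.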

[0053] Further, step S1 specifically comprises:

[0054] Step S11: analyze the categories that...



Abstract

The present invention relates to a BiSeNet-based semantic segmentation method for autonomous driving scenes, comprising the following steps: Step S1: collect urban street image data and preprocess it; Step S2: label the preprocessed image data to obtain labeled image data; Step S3: perform data augmentation on the labeled image data and use the augmented data as the training set; Step S4: build a BiSeNet neural network model and train it on the training set; Step S5: preprocess the video information collected by the camera and perform semantic segmentation of the urban streets in the camera feed using the trained BiSeNet neural network model. The invention can effectively improve the safety of autonomous driving and the accuracy and speed of road scene segmentation.

Description

Technical field

[0001] The invention relates to the fields of pattern recognition and computer vision, and in particular to a BiSeNet-based semantic segmentation method for autonomous driving scenes.

Background technique

[0002] Image semantic segmentation is an essential part of modern autonomous driving systems, as an accurate understanding of the scene surrounding the car is key to navigation and action planning. Semantic segmentation can help autonomous vehicles identify drivable areas in an image. Since the emergence of Fully Convolutional Networks (FCN), convolutional neural networks have gradually become the mainstream approach to semantic segmentation, with many methods borrowed directly from convolutional neural network techniques in other fields. Over the past decade, many scholars have put considerable effort into creating semantic segmentation datasets and improving algorithms. Thanks to the development of ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V10/26, G06V10/774, G06V20/10, G06V20/58, G06K9/62, G06N3/04
CPC: G06V20/40, G06V20/56, G06V10/267, G06N3/045, G06F18/214, Y02T10/40
Inventor: Ke Xiao (柯逍), Jiang Peilong (蒋培龙), Huang Yanyan (黄艳艳)
Owner: FUZHOU UNIV