
Traffic scene semantic segmentation method based on boundary-guided context aggregation

A traffic scene semantic segmentation technology in the field of image processing that addresses the problems of difficult semantic segmentation and erroneous boundary estimation, achieving improved segmentation performance and strong robustness.

Publication status: Pending | Publication date: 2022-07-22
Applicant: CENT SOUTH UNIV


Problems solved by technology

Shallow features contain not only boundary information but also texture noise inside objects, which negatively impacts semantic segmentation. Some existing works use boundary information to refine predicted results; however, since semantic segmentation and image boundary segmentation are not orthogonal tasks, erroneous boundary estimation may hinder the semantic segmentation task.



Examples


Embodiment 1

[0054] Figure 1 shows a flowchart of the traffic scene semantic segmentation method based on boundary-guided context aggregation according to an embodiment of the present invention. The specific steps are as follows:

[0055] Step 1, obtain a traffic scene image.

[0056] Obtain the public dataset of traffic scenes and the corresponding segmentation labels.

[0057] Step 2, perform data processing on the traffic scene image.

[0058] (2-a) Synchronously flip the image in the original sample data and the corresponding segmentation label horizontally;

[0059] (2-b) Scale the image obtained in step (2-a) and the corresponding segmentation label to m1 × m2 pixels, where m1 and m2 are the width and height of the scaled image, respectively; in this embodiment, m1 = 769 and m2 = 769 are preferred;

[0060] (2-c) Normalize the image scaled in step (2-b), together with its corresponding segmentation label, to form the processed sample data set; a sketch of steps (2-a) through (2-c) is given below.
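A minimal sketch of steps (2-a) through (2-c), assuming PIL image/label inputs and a recent torchvision; the 0.5 flip probability and the ImageNet normalization statistics are assumptions, since the embodiment does not specify them.

```python
import random

import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode


def preprocess(image, label, m1=769, m2=769):
    """Steps (2-a)-(2-c): synchronous flip, scale, and normalize."""
    # (2-a) Synchronously flip the image and its segmentation label horizontally.
    if random.random() < 0.5:
        image = TF.hflip(image)
        label = TF.hflip(label)

    # (2-b) Scale both to m1 x m2 pixels; torchvision's size is [height, width].
    # Nearest-neighbor interpolation keeps label values valid class indices.
    image = TF.resize(image, [m2, m1], interpolation=InterpolationMode.BILINEAR)
    label = TF.resize(label, [m2, m1], interpolation=InterpolationMode.NEAREST)

    # (2-c) Normalize the image; ImageNet statistics are an assumption here,
    # as the patent does not specify the normalization constants.
    image = TF.normalize(TF.to_tensor(image), mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
    return image, label
```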

[0061] Step 3, build a...

Embodiment 2

[0091] The method in Embodiment 1 is used to conduct a traffic scene image semantic segmentation experiment on a public data set. The dataset contains 19 categories: road, sidewalk, building, wall, fence, pole, traffic light, traffic sign, vegetation, terrain, sky, person, rider, car, truck, bus, train, motorcycle, and bicycle. The experiments run on Linux, are implemented with the PyTorch 1.6.0 framework on CUDA 10.0 and cuDNN 7.6.0, and use four NVIDIA GeForce RTX 2080Ti (11 GB) GPUs.
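For reference, the 19 categories can be collected in code; the tuple below follows the order in the prose, and assigning ids 0-18 in that order is an illustrative assumption, not something the patent specifies.

```python
# The 19 evaluation categories listed above, indexed 0-18 in prose order.
CLASSES = (
    "road", "sidewalk", "building", "wall", "fence", "pole",
    "traffic light", "traffic sign", "vegetation", "terrain", "sky",
    "person", "rider", "car", "truck", "bus", "train",
    "motorcycle", "bicycle",
)
NUM_CLASSES = len(CLASSES)  # 19
```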

[0092] This embodiment uses the intersection-over-union (IoU) metric to compare six methods, RefineNet, PSPNet, AAF, PSANet, AttaNet, and DenseASPP, with the present invention on the test set. The average of this metric over all categories is denoted mIoU and is calculated as follows:

[0093] mIoU = \frac{1}{K+1}\sum_{i=0}^{K}\frac{p_{ii}}{\sum_{j=0}^{K} p_{ij} + \sum_{j=0}^{K} p_{ji} - p_{ii}}

where p_{ij} denotes the number of pixels belonging to class i that are predicted as class j.

[0094] K+1 represents the total number of categories ...
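A minimal NumPy sketch of this metric, consistent with the formula in [0093]; the function name and the confusion-matrix construction are illustrative, not taken from the patent.

```python
import numpy as np


def mean_iou(pred, target, num_classes):
    """Compute per-class IoU and mIoU as in [0093].

    pred, target: integer arrays of class indices with the same shape;
    num_classes corresponds to K + 1 in the formula.
    """
    # Confusion matrix: conf[i, j] = number of pixels of class i predicted as j.
    mask = (target >= 0) & (target < num_classes)
    conf = np.bincount(
        target[mask].astype(np.int64) * num_classes + pred[mask].astype(np.int64),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

    tp = np.diag(conf)                      # p_ii
    denom = conf.sum(1) + conf.sum(0) - tp  # sum_j p_ij + sum_j p_ji - p_ii
    iou = tp / np.maximum(denom, 1)         # guard against empty classes
    return iou, iou.mean()
```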



Abstract

The invention discloses a traffic scene semantic segmentation method based on boundary-guided context aggregation. The method comprises the steps of 1) obtaining a data set and segmentation labels; 2) data processing; 3) constructing a segmentation model; 4) constructing a loss function; 5) training the segmentation model; and 6) segmenting traffic scene images. The segmentation model's boundary refining module retains high-level semantic boundary information while removing low-level contour texture information, so the model can effectively detect object boundaries, aggregate context information along those boundaries, enhance the consistency of same-class pixels, and improve segmentation efficiency. Boundary information is thus effectively exploited for semantic segmentation of traffic scene images. The method can capture the dependencies between pixels in boundary regions and pixels inside objects, effectively improving segmentation accuracy and robustness.
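The abstract's aggregation of context along object boundaries can be pictured with a small attention-style module. The sketch below is one plausible realization under stated assumptions: the 1×1 projection layers, the softmax affinity, and the residual fusion are illustrative choices, not the patent's specification.

```python
import torch
import torch.nn as nn


class BoundaryGuidedAggregation(nn.Module):
    """Attention-style context aggregation between boundary and body features.

    A hedged sketch of the idea in the abstract; the actual module in the
    patent may differ in structure and detail.
    """

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)  # from body features
        self.key = nn.Conv2d(channels, channels, 1)    # from boundary features
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, body, boundary):
        b, c, h, w = body.shape
        q = self.query(body).flatten(2).transpose(1, 2)      # (B, HW, C)
        k = self.key(boundary).flatten(2)                    # (B, C, HW)
        v = self.value(boundary).flatten(2).transpose(1, 2)  # (B, HW, C)

        # Affinity between every interior pixel and every boundary pixel.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)       # (B, HW, HW)
        context = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return body + context  # residual fusion enhances same-class consistency
```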

Description

technical field

[0001] The invention belongs to the technical field of image processing, relates to automatic segmentation of traffic scene images, and can be used for automatic driving.

Background technique

[0002] The purpose of semantic segmentation is to assign a category label to each pixel in a given image, achieving classification of similar pixels and providing rich, detailed image information; it has wide application space and development prospects. For example, in autonomous driving scenarios, by segmenting the scene, algorithms can provide information about free space on the road, as well as information such as pedestrians and traffic signs near the vehicle.

[0003] Existing segmentation methods mainly use convolution operations to enlarge the receptive field and capture global context information. This approach ignores the relationship between the interior of an object and its boundary, resulting in the loss of boundary inform...
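To make the conventional approach in [0003] concrete, the following is a minimal PyTorch sketch of enlarging the receptive field with stacked dilated convolutions; the channel count and dilation rates are illustrative assumptions, and this illustrates the prior approach the patent contrasts with, not the invention itself.

```python
import torch
import torch.nn as nn

# Stacked dilated 3x3 convolutions: padding equal to the dilation rate keeps
# the spatial size fixed while each layer widens the receptive field.
context_branch = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1, dilation=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2),  # wider view
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=4, dilation=4),  # wider still
)

features = torch.randn(1, 256, 97, 97)  # e.g. a 769x769 input downsampled 8x
print(context_branch(features).shape)   # torch.Size([1, 256, 97, 97])
```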


Application Information

IPC(8): G06V20/70; G06V10/20; G06V10/44; G06V10/82; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/048; G06N3/045
Inventors: 赵于前, 肖晓阳, 张帆, 阳春华, 桂卫华
Owner: CENT SOUTH UNIV