
Non-structured environment point cloud semantic segmentation method based on cross-modal semantic enhancement

An unstructured-environment point cloud semantic segmentation technology, applied in the field of intelligent vehicle environment perception. It addresses problems such as the limited adaptability of methods that lack three-dimensional structural information, the reliance on depth and geometric information alone, and the difficulty of distinguishing objects with low resolution or similar geometric features, with the effects of reducing indexing and computation time, ensuring accuracy and real-time performance, and improving adaptability.

Pending Publication Date: 2022-05-27
SOUTHEAST UNIV
Cites: 0 | Cited by: 1

AI Technical Summary

Problems solved by technology

Camera-based semantic segmentation methods for unstructured environments rely on color and texture features that are susceptible to interference from lighting and weather, and their lack of 3D structural information further limits their adaptability across scenes.
Lidar-based semantic segmentation algorithms, owing to the sparsity, disorder, and uneven distribution of point cloud data, and because the depth and geometric information they rely on is too simple, have difficulty distinguishing objects with low resolution or similar geometric features.
Methods based on image-radar fusion (input-level, feature-level, and decision-level fusion) are highly dependent on the quality of each input signal or on existing prior knowledge, and cannot cope with complex and changeable unstructured scenes.

Method used




Embodiment Construction

[0078] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments:

[0079] 3D scene understanding is a key technology for ground unmanned systems and a prerequisite for safe, reliable passage through both structured and unstructured environments. Mature techniques to date are mainly designed for urban structured environments; research on unstructured environments (such as emergency rescue scenarios) is scarce and the technology is not yet mature. An unstructured environment lacks structural features such as lanes, road surfaces, and guardrails, and the drivable area has blurred boundaries and diverse textures. Existing algorithms designed for structured environments are therefore difficult to apply directly to unstructured environments.

[0080] At present, deep learning-based semantic segmentation tasks mostly use cameras and lidars as their main sensor dat...



Abstract

The invention provides an unstructured-environment point cloud semantic segmentation method based on cross-modal semantic enhancement. It addresses two shortcomings of current point cloud segmentation algorithms: the lack of image semantic information such as texture and color, and the difficulty of meeting accuracy and real-time requirements simultaneously in unstructured environments. An unstructured-environment semantic segmentation network based on deep fusion of images and lidar is constructed as follows: first, a point cloud segmentation module based on spherical projection is designed; second, an image segmentation module based on residual cross-layer connections is designed; then, a GAN-based two-dimensional pseudo-semantic enhancement module is designed to compensate for the semantic information, such as color and texture, that the point cloud lacks; finally, the network is trained on the sample set to obtain its parameters, achieving efficient and reliable semantic segmentation of three-dimensional point clouds in unstructured environments.
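The abstract names spherical projection as the first module of the network. As a minimal sketch (not the patent's implementation; the image size, vertical field of view, and function name are illustrative assumptions modeled on a 64-beam lidar), such a projection maps each 3D point to a pixel of a 2D range image so that standard 2D convolutions can be applied:

```python
import numpy as np

# Assumed range-image resolution and sensor field of view (illustrative only).
H, W = 64, 1024
FOV_UP, FOV_DOWN = 3.0, -25.0                      # degrees
fov_total = np.radians(FOV_UP - FOV_DOWN)

def spherical_project(points):
    """Map (N, 3) lidar points (x, y, z) to integer range-image coords (v, u)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)             # range of each point
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                       # elevation angle
    u = 0.5 * (1.0 - yaw / np.pi) * W              # azimuth -> column
    v = (1.0 - (pitch - np.radians(FOV_DOWN)) / fov_total) * H  # elevation -> row
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)
    return np.stack([v, u], axis=1)

# A point straight ahead on the sensor's x-axis lands in the middle column.
coords = spherical_project(np.array([[10.0, 0.0, 0.0]]))
```

Per-pixel features (range, intensity, x, y, z) stored at these coordinates form the input tensor of the point cloud segmentation branch; predicted labels are then scattered back to the original points through the same index map.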

Description

Technical field

[0001] The invention relates to the technical field of intelligent vehicle environment perception, and in particular to an unstructured-environment point cloud semantic segmentation method based on cross-modal semantic enhancement.

Background technique

[0002] 3D scene understanding is a key technology for ground unmanned systems and a prerequisite for safe, reliable passage through both structured and unstructured environments. Mature techniques to date are mainly designed for urban structured environments; research on unstructured environments (such as emergency rescue scenarios) is scarce and the technology is not yet mature. An unstructured environment lacks structural features such as lanes, pavements, and guardrails, and the drivable area has blurred boundaries and diverse textures; at the same time, affected by terrain, shrubs, and other vegetation, the features of obstacles are complex and changeable, and there ar...

Claims


Application Information

IPC(8): G06T7/10; G06T7/521; G06V10/26; G06V10/44; G06V10/764; G06V10/82; G06K9/62; G06N3/04; G06N3/08
CPC: G06T7/10; G06T7/521; G06N3/08; G06T2207/10028; G06T2207/20076; G06T2207/20081; G06T2207/20084; G06N3/047; G06N3/048; G06N3/045; G06F18/2415; G06F18/241
Inventor: 李旭, 倪培洲, 徐启敏, 祝雪芬
Owner: SOUTHEAST UNIV