Large-scene point cloud semantic segmentation method

A large-scene point cloud semantic segmentation technology in the field of computer vision, addressing problems such as feature noise and redundancy of the point cloud at the encoding layer, loss of key information, and degraded semantic segmentation performance

Active Publication Date: 2021-05-18
SICHUAN UNIV
Problems solved by technology

First, since a large number of points are randomly discarded during cross-layer feature propagation, the loss of key information is unavoidable. The existing approach of compensating for this by expanding the receptive field cannot fundamentally solve the problem, because it cannot effectively perceive the cross-layer information loss. Second, due to the loss of key …




Detailed Description of the Embodiments

[0063] In order to facilitate understanding by those skilled in the art, the present invention is further described below in conjunction with the embodiments and the accompanying drawings; the contents mentioned in the embodiments are not intended to limit the present invention.

[0064] Referring to Figure 1 and Figure 7, a large-scene point cloud semantic segmentation method includes the following steps:

[0065] S10: Perform feature stitching on the 3D point cloud data containing feature information to obtain the initial features of the point cloud.

[0066] The feature information of the 3D point cloud data mainly includes 3D coordinate information and RGB color information. First, the feature information of the 3D point cloud data is concatenated to obtain stitched features; the stitched features are then fused through a convolutional layer or a fully connected layer to obtain the initial features of the point cloud in a preset output dimension. ...
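The stitching-and-fusion step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `initial_features`, the output dimension, and the randomly initialized linear projection (standing in for the patent's convolutional or fully connected layer) are all assumptions.

```python
import numpy as np

def initial_features(xyz, rgb, out_dim=8, seed=0):
    """Sketch of step S10: stitch per-point xyz and RGB, then fuse with a
    linear projection to a preset output dimension.

    xyz: (N, 3) coordinates; rgb: (N, 3) colors in [0, 1].
    The random projection is a placeholder for a learned layer.
    """
    stitched = np.concatenate([xyz, rgb], axis=1)       # (N, 6) stitched features
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((stitched.shape[1], out_dim)) * 0.1
    b = np.zeros(out_dim)
    return stitched @ w + b                             # (N, out_dim) initial features

pts = np.random.default_rng(1).random((1000, 3))
cols = np.random.default_rng(2).random((1000, 3))
feats = initial_features(pts, cols, out_dim=8)
print(feats.shape)   # (1000, 8)
```

In a trained model the projection weights would of course be learned jointly with the rest of the network rather than drawn at random.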


Abstract

The invention discloses a large-scene point cloud semantic segmentation method. The method comprises the following steps: carrying out feature stitching on three-dimensional point cloud data containing feature information to obtain initial point cloud features; performing dilated graph convolution and random sampling on the initial point cloud features to obtain multi-layer intermediate features and sampled encoding features; performing cross-layer context reasoning on the multi-layer intermediate features to obtain complementary context features, and stitching the complementary context features into the sampled encoding features of the last layer to obtain the final encoding features; decoding the final encoding features to obtain decoding features; inputting the decoding features into a fully connected classifier to obtain a segmentation prediction; and constructing a loss function, training and optimizing the model, and storing the model parameters. The method aggregates multi-layer contexts through cross-layer context reasoning in the encoding stage and selects features through attention fusion in the decoding stage, so that information loss is effectively compensated and feature redundancy is reduced while efficiency is maintained, thereby improving accuracy.
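The encoding stage described above repeatedly downsamples the point set while keeping the per-layer features as multi-layer intermediate features. The skeleton below sketches only that random-sampling hierarchy; the 1/4 sampling ratio and layer count are illustrative assumptions, and the patent's dilated graph convolution is omitted (each layer here just records its input), so this shows the sampling structure only.

```python
import numpy as np

def random_sampling_encoder(features, num_layers=4, ratio=4, seed=0):
    """Sketch of a random-sampling encoder hierarchy.

    Each layer stores its input as an intermediate feature map, then keeps a
    random 1/ratio subset of points for the next layer. Returns the list of
    multi-layer intermediate features and the final sampled encoding.
    """
    rng = np.random.default_rng(seed)
    intermediates = []
    x = features
    for _ in range(num_layers):
        intermediates.append(x)                       # multi-layer intermediate features
        keep = max(1, x.shape[0] // ratio)
        idx = rng.choice(x.shape[0], size=keep, replace=False)
        x = x[idx]                                    # random sampling to the next layer
    return intermediates, x

feats = np.zeros((1024, 8))
inter, enc = random_sampling_encoder(feats)
print([f.shape[0] for f in inter], enc.shape[0])   # [1024, 256, 64, 16] 4
```

The patent's cross-layer context reasoning would then aggregate the entries of `intermediates` into complementary context features that are stitched onto the final encoding.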

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision, and specifically relates to an efficient and accurate semantic segmentation method for three-dimensional point clouds of large scenes using deep learning algorithms.

Background

[0002] A point cloud is one of the most basic representations of a 3D scene, and usually contains the coordinates and related features (such as color) of each point in 3D space. The task of point cloud semantic segmentation is to assign each point in the point cloud to its corresponding category through computation and analysis. Early on, due to limited sensing distance, research mainly focused on indoor point clouds of small scenes. When processing this type of point cloud, the complete point cloud is usually divided into sub-blocks of fixed size and fixed number of points, and feature extraction and learning are then performed on each sub-block.

[0003] With the rapid de...
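The sub-block strategy mentioned in the background section can be sketched as below. This is a hedged illustration of the general technique, not the patent's method: the block size, the fixed point count, and the 2D grid partition are illustrative choices.

```python
import numpy as np

def split_into_blocks(points, block_size=1.0, points_per_block=4096, seed=0):
    """Split a point cloud into fixed-size ground-plane blocks, each sampled
    to a fixed number of points (the early small-scene strategy)."""
    rng = np.random.default_rng(seed)
    # Assign each point to a 2D grid cell of side `block_size`.
    cells = np.floor(points[:, :2] / block_size).astype(int)
    blocks = []
    for cell in np.unique(cells, axis=0):
        idx = np.flatnonzero((cells == cell).all(axis=1))
        # Sample with replacement only when the block has too few points.
        chosen = rng.choice(idx, size=points_per_block,
                            replace=len(idx) < points_per_block)
        blocks.append(points[chosen])
    return blocks

pts = np.random.default_rng(3).random((10000, 3)) * 2.0   # a 2 m x 2 m x 2 m scene
blocks = split_into_blocks(pts, block_size=1.0, points_per_block=1024)
print(len(blocks), blocks[0].shape)   # 4 (1024, 3)
```

The fixed size and point count make batching convenient, but, as the description notes, this block-wise approach was designed for small indoor scenes rather than large outdoor ones.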


Application Information

IPC(8): G06T7/10, G06K9/62, G06N3/04, G06N3/08
CPC: G06T7/10, G06N3/08, G06T2207/10028, G06N3/048, G06F18/2453, Y02T10/40
Inventors: 雷印杰, 金钊
Owner SICHUAN UNIV