
A large scene point cloud semantic segmentation method

A semantic segmentation technology for large scene point clouds, applied in the field of computer vision. It addresses problems such as the inability of existing methods to effectively perceive cross-layer information loss, the noise and redundancy in encoding-layer point cloud features, and the resulting degradation of semantic segmentation performance, and it achieves high semantic segmentation accuracy, reduced feature redundancy, and strong applicability.

Active Publication Date: 2022-07-12
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

First, because a large number of points are randomly dropped during cross-layer feature propagation, the loss of key information is unavoidable.
The existing approach of compensating for this loss by enlarging the receptive field cannot solve the problem at its root, because it cannot effectively perceive which cross-layer information has been lost.
Second, owing to the loss of key information and the sparsity of large-scale point clouds, the encoding-layer point cloud features accumulate noise and redundancy from the aggregation of invalid information.
Current methods usually splice the encoding-layer features directly into the decoding layer to restore the sampled point cloud, which degrades semantic segmentation performance.

Embodiment Construction

[0063] To facilitate understanding by those skilled in the art, the present invention is further described below with reference to the embodiments and the accompanying drawings; the contents mentioned in the embodiments are not intended to limit the present invention.

[0064] As shown in Figure 1 and Figure 7, a large scene point cloud semantic segmentation method includes the following steps:

[0065] S10: Perform feature splicing on the 3D point cloud data containing feature information to obtain the initial point cloud features.

[0066] The feature information of 3D point cloud data mainly includes 3D coordinate information and RGB color information. The feature information of the 3D point cloud data is first spliced to obtain spliced features, and the spliced features are then fused through a convolution layer or a fully connected layer to obtain the initial point cloud features with a preset output dimension.
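To make step S10 concrete, the following is a minimal Python sketch of the splicing-and-fusion idea in paragraph [0066], assuming XYZ coordinates and RGB colors as the per-point features and a single shared fully connected layer for fusion. The function name, output dimension, and use of NumPy are illustrative assumptions rather than details taken from the patent.

```python
# Hypothetical illustration of step S10 (not the patent's actual implementation):
# splice per-point XYZ and RGB features, then fuse them with one shared
# fully connected layer to a preset output dimension.
import numpy as np

def fuse_initial_features(xyz, rgb, weight, bias):
    """xyz: (N, 3) coordinates; rgb: (N, 3) colors.
    weight: (6, out_dim) and bias: (out_dim,) of a shared per-point linear layer.
    Returns (N, out_dim) initial point cloud features."""
    spliced = np.concatenate([xyz, rgb], axis=1)   # (N, 6) spliced features
    fused = spliced @ weight + bias                # shared linear fusion
    return np.maximum(fused, 0.0)                  # ReLU non-linearity

# Example: 1024 points fused to 8-dimensional initial features (dimensions are made up)
rng = np.random.default_rng(0)
xyz = rng.normal(size=(1024, 3))
rgb = rng.uniform(size=(1024, 3))
weight = rng.normal(scale=0.1, size=(6, 8))
bias = np.zeros(8)
print(fuse_initial_features(xyz, rgb, weight, bias).shape)   # (1024, 8)
```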

[0067] In this...

Abstract

The invention discloses a large scene point cloud semantic segmentation method comprising the following steps: perform feature splicing on three-dimensional point cloud data containing feature information to obtain initial point cloud features; apply dilated graph convolution and random sampling to the initial point cloud features to obtain multi-layer intermediate features and sampled coding features; perform cross-layer context reasoning on the multi-layer intermediate features to obtain complementary context features, and splice them into the coding features produced by the last layer to obtain the final coding features; decode the final coding features to obtain decoding features; input the decoding features into a fully connected layer classifier to obtain the predicted segmentation result; construct a loss function to train and optimize the model, and save the model parameters. The invention uses cross-layer context reasoning to aggregate multi-layer context in the encoding stage and uses attention fusion for feature selection in the decoding stage, which can effectively compensate for information loss and reduce feature redundancy while maintaining efficiency, thereby improving accuracy.
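For orientation only, here is a highly simplified PyTorch-style sketch of the encode, cross-layer context reasoning, and decode pipeline outlined in the abstract. Plain linear layers stand in for the dilated graph convolution blocks, the context reasoning module, and the attention-fused decoder, and the random down-sampling and up-sampling between layers are omitted; all class and attribute names are assumptions for illustration, not the patent's actual modules.

```python
import torch
import torch.nn as nn

class LargeScenePointSegSketch(nn.Module):
    """Toy stand-in for the pipeline in the abstract: encode, cross-layer
    context reasoning, decode, per-point classification. Not the real architecture."""

    def __init__(self, in_dim=6, feat_dim=32, num_layers=4, num_classes=13):
        super().__init__()
        self.init_fuse = nn.Linear(in_dim, feat_dim)                   # initial feature fusion (step S10)
        self.encoder = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_layers)] # stand-ins for dilated graph conv blocks
        )
        self.context = nn.Linear(num_layers * feat_dim, feat_dim)      # stand-in for cross-layer context reasoning
        self.decoder = nn.Linear(2 * feat_dim, feat_dim)               # stand-in for attention-fused decoding
        self.classifier = nn.Linear(feat_dim, num_classes)             # fully connected classifier

    def forward(self, points):                        # points: (N, 6) spliced xyz + rgb
        x = torch.relu(self.init_fuse(points))        # initial point cloud features
        intermediates = []
        for block in self.encoder:
            x = torch.relu(block(x))                  # per-layer encoding (random sampling omitted here)
            intermediates.append(x)
        context = torch.relu(self.context(torch.cat(intermediates, dim=-1)))  # complementary context features
        final_enc = torch.cat([x, context], dim=-1)   # splice context into last-layer coding features
        decoded = torch.relu(self.decoder(final_enc))
        return self.classifier(decoded)               # per-point class logits

# Usage: per-point logits for 2048 points; cross-entropy against labels would train it
logits = LargeScenePointSegSketch()(torch.randn(2048, 6))
print(logits.shape)  # torch.Size([2048, 13])
```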

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a method for efficient and accurate semantic segmentation of three-dimensional point clouds of large scenes using a deep learning algorithm.

Background technique

[0002] A point cloud is one of the most basic representations of a 3D scene; it usually contains the coordinates and related features (such as color) of each point in 3D space. The task of point cloud semantic segmentation is to assign each point in the point cloud to its corresponding category through computation and analysis. Early on, because sensing distances were limited, research mainly focused on indoor point clouds of small scenes. When processing this type of point cloud, the complete point cloud is usually divided into sub-blocks with a fixed size and a fixed number of points, and feature extraction and learning are then performed on each sub-block (a sketch of this block-based pre-processing appears after this section).

[0003] With the ra...
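As a hedged illustration of the block-based pre-processing mentioned in paragraph [0002], the sketch below splits a point cloud into fixed-size XY blocks and resamples each block to a fixed number of points. The block size, point count, and function name are assumptions for illustration and do not come from the patent or any cited prior work.

```python
import numpy as np

def split_into_blocks(points, block_size=1.0, points_per_block=4096, seed=0):
    """Split a point cloud into fixed-size XY blocks, each resampled to a
    fixed number of points. points: (N, D) with x, y in the first two columns."""
    rng = np.random.default_rng(seed)
    cells = np.floor(points[:, :2] / block_size).astype(int)            # block index of every point
    blocks = []
    for cell in np.unique(cells, axis=0):
        idx = np.flatnonzero((cells == cell).all(axis=1))               # points falling in this block
        choice = rng.choice(idx, size=points_per_block, replace=True)   # fix the per-block point count
        blocks.append(points[choice])
    return blocks

# Example: a synthetic 10 m x 10 m scene split into 1 m blocks of 4096 points each
cloud = np.random.default_rng(1).uniform(0.0, 10.0, size=(100_000, 6))
blocks = split_into_blocks(cloud)
print(len(blocks), blocks[0].shape)   # 100 blocks, each (4096, 6)
```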

Claims

Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/10; G06K9/62; G06N3/04; G06N3/08; G06V10/764; G06V10/82
CPC: G06T7/10; G06N3/08; G06T2207/10028; G06N3/048; G06F18/2453; Y02T10/40
Inventor: 雷印杰, 金钊
Owner: SICHUAN UNIV