
3D point cloud semantic segmentation method based on position attention and auxiliary network

A 3D point cloud semantic segmentation technology in the field of data processing, addressing the weak feature representation, neglect of low-level information, and low segmentation accuracy of prior methods, so as to improve segmentation accuracy.

Active Publication Date: 2019-10-11
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0005] Compared with traditional methods, the above two methods process 3D point cloud data directly, are computationally simple, handle the disorder of point clouds effectively, and improve segmentation accuracy. However, PointNet++ does not consider the relationships among the features of the center points, that is, the context information, so its feature representation is relatively weak. In addition, PointNet++ follows the general encoder-decoder framework and does not exploit more low-level information. Its segmentation accuracy is therefore not high, and there is still room for improvement.




Embodiment Construction

[0021] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0022] Referring to figure 1, the implementation steps of this example include the following.

[0023] Step 1, get training set T and test set V.

[0024] 1.1) Download the training file and test file of 3D point cloud data from the ScanNet official website, where the training file contains f_0 point cloud scenes and the test file contains f_1 point cloud scenes; in this embodiment f_0 = 1201, f_1 = 312;

[0025] 1.2) Use a histogram to count, over all f_0 scenes in the training file, the number of points of each category in the point cloud data, and calculate the weight w_k of each category:

[0026]

[0027] where G_k represents the number of points of the k-th class, M represents the number of all point cloud data, and L represents the number of segmentation categories, L ≥ 2; in this embodiment L = 21;
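The class-weight computation of step 1.2) can be sketched as follows. Since the source does not reproduce formula [0026], the inverse-log-frequency form used below (w_k = 1 / ln(c + G_k / M)) is an assumption — a common choice for class-imbalanced segmentation — not the patent's actual formula:

```python
import numpy as np

def class_weights(labels, num_classes, c=1.2):
    """Histogram-based per-class weights for imbalanced point cloud data.

    labels: 1-D integer array of per-point class labels over all scenes.
    The inverse-log-frequency form is an assumed stand-in for the
    patent's unreproduced formula [0026].
    """
    counts = np.bincount(labels, minlength=num_classes)  # G_k for each class
    freq = counts / labels.size                          # G_k / M
    return 1.0 / np.log(c + freq)                        # rarer class -> larger weight

# Toy example with L = 3 classes and M = 6 points.
labels = np.array([0, 0, 0, 1, 1, 2])
w = class_weights(labels, num_classes=3)
```

Under this form, the rarest class (here class 2) receives the largest weight, so the loss penalizes its misclassification more heavily.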

[0028] 1.3) For each scene...



Abstract

The invention provides a 3D point cloud semantic segmentation method based on position attention and an auxiliary network, mainly solving the problem of low segmentation precision in the prior art. The implementation scheme comprises the following steps: acquiring a training set T and a test set V; constructing a 3D point cloud semantic segmentation network and setting its loss function, where the network comprises a feature down-sampling network, a position attention module, a feature up-sampling network and an auxiliary network, cascaded in sequence; performing P rounds of supervised training on the segmentation network with the training set T, adjusting the network parameters according to the loss function in each round, and, after the P rounds are completed, taking the model with the highest segmentation precision as the trained network model; and inputting the test set V into the trained network model for semantic segmentation to obtain a segmentation result for each point. The method improves 3D point cloud semantic segmentation precision and can be used for automatic driving, robots, 3D scene reconstruction, quality detection, 3D drawing and smart city construction.
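The abstract names a position attention module but does not specify its form here. The sketch below assumes the common self-attention formulation over point positions (DANet-style position attention): each point's feature is re-weighted by its similarity to every other point, injecting the cross-point context information that, per the problems section, plain PointNet++ lacks. The projection matrices and the residual scaling `gamma` are illustrative placeholders for learned parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feats, wq, wk, wv, gamma=1.0):
    """Self-attention over point positions (assumed DANet-style form).

    feats:      (N, C) per-point features from the down-sampling network.
    wq, wk, wv: (C, C) projection matrices (learned in practice, random here).
    Returns     (N, C) attention-refined features with a residual connection.
    """
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    attn = softmax(q @ k.T / np.sqrt(feats.shape[1]))  # (N, N) pairwise similarities
    return gamma * (attn @ v) + feats                  # residual keeps the original signal

rng = np.random.default_rng(0)
n, c = 8, 4
feats = rng.standard_normal((n, c))
out = position_attention(feats, *(rng.standard_normal((c, c)) for _ in range(3)))
```

The residual connection means that with gamma = 0 the module reduces to the identity, so training can start from the plain encoder-decoder behavior and gradually mix in context.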

Description

Technical field

[0001] The invention belongs to the technical field of data processing, and in particular relates to a 3D point cloud semantic segmentation method, which can be used in automatic driving, robots, 3D scene reconstruction, quality inspection, 3D mapping and smart city construction.

Background technique

[0002] In recent years, with the wide application of 3D sensors such as lidar and RGBD cameras in robotics and driverless vehicles, deep learning on 3D point cloud data has become one of the research hotspots. 3D point cloud data is a collection of vectors in a three-dimensional coordinate system. These vectors are usually expressed as x, y, z coordinates and are generally used to represent the outer surface shape of an object. Beyond the geometric information (x, y, z), a point may also carry information such as RGB color, intensity, gray value, ...
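The background's description of a point cloud — coordinate vectors optionally extended with color channels — can be illustrated minimally. The (N, 6) array layout below is my own illustrative choice, not something the patent specifies:

```python
import numpy as np

# Four points, each with xyz geometry plus RGB color: an (N, 6) array.
cloud = np.array([
    [0.0, 0.0, 0.0, 255, 0, 0],      # red point at the origin
    [1.0, 0.0, 0.0, 0, 255, 0],      # green point on the x axis
    [0.0, 1.0, 0.0, 0, 0, 255],      # blue point on the y axis
    [0.0, 0.0, 1.0, 128, 128, 128],  # gray point on the z axis
], dtype=np.float32)

xyz = cloud[:, :3]  # geometric information (x, y, z)
rgb = cloud[:, 3:]  # per-point color attributes
```

A point cloud is an unordered set, so any permutation of the rows represents the same scene — the disorder property the problems section says PointNet-style methods handle directly.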

Claims


Application Information

IPC(8): G06T7/11; G06N3/04; G06N3/08
CPC: G06T7/11; G06N3/08; G06T2207/10028; G06N3/045
Inventor 焦李成冯志玺张格格杨淑媛程曦娜马清华张杰郭雨薇丁静怡唐旭
Owner XIDIAN UNIV