
3D point cloud semantic segmentation method under bird's-eye view coding view angle

A semantic segmentation method under a bird's-eye-view encoding perspective, applied in the field of computer vision. It addresses problems such as point cloud scene understanding being limited by high data sparsity, insufficient robustness of local features, and the difficulty of processing large-scale point clouds in real time. Effects include an enlarged receptive field and strong feature extraction and recognition ability.

Inactive Publication Date: 2020-10-30
XI AN JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0013] In order to solve the aforementioned technical problems, namely that point cloud scene understanding is easily limited by high data sparsity, that local features are not robust enough, and that excessive system overhead makes it difficult to process large-scale point clouds in real time, the invention provides a 3D point cloud semantic segmentation method under a bird's-eye-view encoding perspective. For input large-scale point cloud data, the bird's-eye-view encoding converts the three-dimensional information of the point cloud into a feature map that can be processed directly by two-dimensional convolution, and an end-to-end fully convolutional network then completes the point cloud semantic segmentation task.
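As a rough illustration of the bird's-eye-view encoding described above, the following sketch projects a raw point cloud onto an X-Y grid and fills a few per-cell channels. The grid extent, resolution, and hand-picked channels (max height, mean reflectance, point count) are assumptions chosen for illustration; the patent itself derives the per-voxel features with a learned network rather than fixed statistics.

import numpy as np

def bev_encode(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6), grid=(512, 512)):
    # points: N x 4 array of (x, y, z, reflectance) from one lidar scan.
    h, w = grid

    # Keep only points that fall inside the covered ground area.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]

    # Map metric x/y coordinates to integer row/column indices of the grid.
    rows = ((pts[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * h).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * w).astype(int)
    rows = np.clip(rows, 0, h - 1)
    cols = np.clip(cols, 0, w - 1)

    # Channels: max height, mean reflectance, point count (illustrative only).
    feat = np.zeros((3, h, w), dtype=np.float32)
    feat[0].fill(-1e9)                      # placeholder so max also handles negative heights
    np.maximum.at(feat[0], (rows, cols), pts[:, 2])
    np.add.at(feat[1], (rows, cols), pts[:, 3])
    np.add.at(feat[2], (rows, cols), 1.0)

    occupied = feat[2] > 0
    feat[0][~occupied] = 0.0                # empty cells get a neutral height
    feat[1][occupied] /= feat[2][occupied]  # summed reflectance -> per-cell mean
    return feat

With a 512 x 512 grid over roughly a 50 m x 50 m area each cell covers about 0.1 m, and the resulting (3, 512, 512) array can be fed directly to an ordinary 2D convolutional network.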



Examples


Embodiment

[0066] The present invention provides a 3D point cloud semantic segmentation method under the bird's-eye-view encoding perspective, comprising the steps of training the network model and running the model.

[0067] 1. Training network model

[0068] To train the 3D point cloud semantic segmentation network model under the bird's-eye-view encoding perspective, sufficient point cloud data is required first. Each frame of the point cloud scene samples should contain the XYZ coordinates, the reflectivity, and the semantic category of every point. Taking the SemanticKITTI outdoor lidar point cloud dataset as an example, a total of 15,000 frames of scene point clouds are used as the training set and 3,000 frames as the validation set.
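For concreteness, a minimal loader for one SemanticKITTI frame might look like the sketch below. It assumes the public SemanticKITTI file layout (a '.bin' scan of float32 x, y, z, reflectance quadruplets and a '.label' file whose lower 16 bits carry the semantic class id); the function name and paths are hypothetical.

import numpy as np

def load_semantickitti_frame(bin_path, label_path):
    # One scan: N x 4 float32 points (x, y, z, reflectance).
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    # Lower 16 bits of each uint32 entry hold the per-point semantic class id.
    labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF
    assert points.shape[0] == labels.shape[0]
    return points, labels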

[0069] After obtaining enough point cloud data, each frame of the point cloud first needs to be encoded from the bird's-eye-view perspective into grid voxels under the bird's-eye view, and then a simplified PointNet network is used to extract a feature for each voxel.
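A minimal sketch of such a per-voxel feature extractor, in the spirit of a simplified PointNet (a shared point-wise MLP followed by a max over the points of each voxel), is given below. The layer width, the zero-padding scheme for voxels, and the PyTorch formulation are illustrative assumptions, not details taken from the patent.

import torch
import torch.nn as nn

class SimplifiedPointNet(nn.Module):
    # Shared linear + BN + ReLU on every point, then a max over each voxel's points.
    def __init__(self, in_dim=4, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.BatchNorm1d(out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, voxel_points, voxel_mask):
        # voxel_points: (V, P, in_dim) points grouped per non-empty voxel, zero-padded to P points
        # voxel_mask:   (V, P) with 1.0 for real points and 0.0 for padding
        v, p, c = voxel_points.shape
        x = self.mlp(voxel_points.reshape(v * p, c)).reshape(v, p, -1)
        # Suppress padded slots before pooling so they never win the max.
        x = x.masked_fill(voxel_mask.unsqueeze(-1) == 0, float('-inf'))
        return x.max(dim=1).values  # (V, out_dim): one feature vector per voxel

Each voxel's feature vector can then be scattered back to its row/column position in the bird's-eye-view grid, yielding the 2D feature map that the subsequent convolutional network consumes.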



Abstract

The invention discloses a 3D point cloud semantic segmentation method under a bird's-eye-view encoding perspective, applied to an input 3D point cloud. The method comprises: converting the voxel-based encoding to the bird's-eye-view perspective; extracting a feature for each voxel with a simplified PointNet network; converting the result into a feature map that can be processed directly by a 2D convolutional network; and processing the encoded feature map with a fully convolutional network composed of residual modules reconstructed through decomposed convolution and dilated (hole) convolution, so that an end-to-end pixel-level semantic segmentation result is obtained. This accelerates point cloud semantic segmentation and enables high-precision, real-time segmentation of large scenes even when hardware is limited. The method can be used directly in tasks such as robotics, autonomous driving, and unordered grasping; owing to the design of its encoding scheme and network structure, it achieves high-precision point cloud semantic segmentation at lower system overhead, making it well suited to hardware-limited scenarios such as robots and autonomous driving.
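The "residual modules reconstructed through decomposed convolution and dilated (hole) convolution" could be sketched roughly as below: a residual block whose 3x3 convolutions are factorized into 3x1 / 1x3 pairs, with dilation applied in the second pair to enlarge the receptive field. Channel counts, normalization placement, and the single dilation rate are illustrative assumptions rather than the patent's exact structure.

import torch.nn as nn

class FactorizedDilatedResBlock(nn.Module):
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv = nn.Sequential(
            # Decomposed convolution: a 3x1 / 1x3 pair in place of one 3x3.
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            # Second decomposed pair with dilation to widen the receptive field.
            nn.Conv2d(channels, channels, (3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3),
                      padding=(0, dilation), dilation=(1, dilation)),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual (skip) connection keeps resolution and eases optimization.
        return self.act(x + self.conv(x))

Stacking such blocks inside a fully convolutional network would give the end-to-end, pixel-level segmentation output described in the abstract.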

Description

Technical field

[0001] The invention belongs to the field of computer vision and in particular relates to a 3D point cloud semantic segmentation method under the encoding perspective of a bird's-eye view.

Background technique

[0002] In 2014 the R-CNN convolutional neural network was proposed, and feature extraction based on convolutional neural networks gradually replaced the traditional hand-crafted feature extraction methods, whose progress had begun to stagnate around 2010. Methods that process two-dimensional images with convolutional neural networks have since dominated the development of computer vision. The key to their success lies in the effective extraction of image features by convolution operations, the accurate fitting of model parameters by data-driven training, and the high robustness and scalability brought by the redundant structure of deep networks themselves. These characteristics of the convolutional neur...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10; G06N3/08; G06N3/04
CPC: G06T7/10; G06N3/084; G06T2207/10028; G06T2207/20081; G06T2207/20084; G06N3/045
Inventor: 杨树明, 李述胜, 袁野, 王腾, 胡鹏宇
Owner: XI AN JIAOTONG UNIV