Multi-modal three-dimensional point cloud segmentation system and method

A 3D point cloud and multi-modal technology, applied in 3D object recognition, character and pattern recognition, biological neural network models, etc. It addresses the problems that existing methods cannot directly process large-scale real point cloud scenes, are susceptible to noise interference, and lack information fusion, achieving good generalization, good robustness, and improved accuracy.

Pending Publication Date: 2020-10-09
SOUTHEAST UNIV +1
Cites: 0 · Cited by: 11

AI Technical Summary

Problems solved by technology

However, the following problems remain: 1) point cloud data are naturally sparse and susceptible to noise interference, so point cloud learning must guarantee a certain degree of robustness; 2) current point cloud processing techniques can be applied only to limited scenarios, due




Embodiment Construction

[0033] Figure 1 is a flow chart of multi-modal 3D point cloud scene segmentation according to Embodiment 1 of the present invention. Each step is detailed below with reference to Figure 1.

[0034] Step 1. Preprocess the collected data. According to the correspondence between point cloud data and image pixels, back-project to obtain point cloud data carrying both color information and spatial coordinates, and divide the whole scene into smaller areas.
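The patent does not give code for this step; the following is a minimal sketch of how such a back-projection might look under a standard pinhole camera model, with the function names, the `block_size` parameter, and the x-y block partition all being illustrative assumptions rather than the patent's actual implementation:

```python
import numpy as np

def backproject_rgbd(depth, rgb, fx, fy, cx, cy):
    """Back-project an aligned RGB-D image into a colored point cloud.

    depth: (H, W) array of depth values (0 marks invalid pixels).
    rgb:   (H, W, 3) color image aligned with the depth map.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    Returns an (N, 6) array of [x, y, z, r, g, b] rows for valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth
    x = (u - cx) * z / fx   # standard pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    return np.hstack([pts, rgb[valid].astype(np.float64)])

def split_into_blocks(points, block_size=1.0):
    """Divide the scene into square blocks in the x-y plane."""
    keys = np.floor(points[:, :2] / block_size).astype(np.int64)
    blocks = {}
    for key, pt in zip(map(tuple, keys), points):
        blocks.setdefault(key, []).append(pt)
    return {k: np.asarray(v) for k, v in blocks.items()}
```

Each resulting block can then be fed to the segmentation network independently, which is one common way to make large real scenes tractable.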

[0035] In this example, the data are collected using a camera that combines three structured-light sensors with different pitches to capture 18 RGB and depth images during a 360° rotation at each scanning position. Each 360° scan is performed in 60° increments, providing six sets of triple RGB-D data per position. The output is a reconstructed 3D textured mesh of the scanned area, the raw RGB-D images, and camera metadata. Additional RGB-D data were generated from these outputs, and a point cloud was produced by sampling the mesh.
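The patent does not specify how the mesh is sampled into a point cloud; a common choice, shown here as a hedged sketch (the function name and signature are assumptions), is area-weighted uniform sampling over the mesh triangles using barycentric coordinates:

```python
import numpy as np

def sample_mesh(vertices, faces, n_points, rng=None):
    """Sample points uniformly over a triangle mesh surface.

    vertices: (V, 3) float array of vertex positions.
    faces:    (F, 3) int array of vertex indices per triangle.
    Returns an (n_points, 3) array of sampled surface points.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    # triangle areas via the cross product; larger faces get more samples
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # barycentric sampling: reflect (u, v) back inside the triangle
    u = rng.random(n_points)
    v = rng.random(n_points)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    return a[idx] + u[:, None] * (b[idx] - a[idx]) + v[:, None] * (c[idx] - a[idx])
```

The area weighting ensures the point density is uniform over the surface rather than over the face list, which matters for meshes with very uneven triangle sizes.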

[0...



Abstract

The invention discloses a multi-modal three-dimensional point cloud segmentation system and method. The invention achieves good fusion of the modal data; a prior mask is introduced, which improves the robustness of the obtained scene segmentation result and yields high segmentation precision. For different scenes, such as toilets, meeting rooms, and offices, good prediction results can be obtained, and the model generalizes well. For other backbone networks used to extract point cloud features, the feature- and decision-fusion modules can be tried to improve precision. If computational conditions allow, more points and a larger area can be used (for example, increasing the number of points and the size of the scene area by the same multiple), which enlarges the receptive field of the whole model and improves its perception of the whole scene.

Description

technical field

[0001] The invention relates to the technical fields of computer vision and computer graphics, and in particular to a multi-modal three-dimensional point cloud segmentation system and method.

Background technique

[0002] With the rapid development of 3D acquisition technology, related sensors are becoming more and more common in our lives, such as various 3D scanning devices, lidar, and RGB-D cameras. 3D point cloud data are used in many machine vision tasks, such as autonomous driving, robot navigation, virtual reality, and augmented reality. In addition, point-cloud-related technologies play an important role in medical image processing, computer graphics, and other fields. For visual tasks, images are easily affected by ambient lighting and shooting angles, and the spatial structure information of objects is lost to a certain extent; point cloud data, by contrast, can contain geometric information of specific scenes in 3D space, and are not easily affected by ...

Claims


Application Information

IPC(8): G06K9/00; G06K9/34; G06K9/46; G06K9/62; G06N3/04
CPC: G06V20/64; G06V10/267; G06V10/56; G06N3/045; G06F18/253
Inventor: 王雁刚, 杭天恺
Owner: SOUTHEAST UNIV