Sparse point cloud segmentation method and device

A sparse point cloud segmentation technology in the field of image processing. It addresses the problems of expensive hardware and low segmentation accuracy and efficiency, and achieves the effects of improved accuracy and efficiency, reduced equipment cost, and good practical application value.

Active Publication Date: 2019-09-20
SHENZHEN UNIV


Problems solved by technology

However, the hardware required by this lidar-combination approach is expensive, and segmenting directly in the raw point cloud is a very difficult problem, so segmentation accuracy and efficiency are relatively low.



Examples


Embodiment 1

[0050] This embodiment differs from both traditional point cloud segmentation methods and existing methods that apply deep learning directly. Traditional methods use purely mathematical models and geometric inference techniques, such as region growing or model fitting, combined with robust estimators to fit linear and nonlinear models to the point cloud data. Such methods are fast and can achieve good segmentation results in simple scenes, but it is difficult to choose the size of the model when fitting objects, and they are sensitive to noise and perform poorly in complex scenes.
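The patent publishes no code; as a purely illustrative sketch of the "model fitting combined with robust estimators" approach mentioned above, the following fits a plane to a point cloud with RANSAC (function name, thresholds, and the toy scene are all assumptions, not the patent's method):

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, seed=None):
    """Robustly fit a plane n.x + d = 0 to an (N, 3) point cloud;
    return a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # sample 3 distinct points and form a candidate plane
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# toy scene: a ground plane plus scattered off-plane noise
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(-1, 1, (500, 2)), np.zeros(500)]
noise = rng.uniform(-1, 1, (50, 3)) + np.array([0, 0, 1.5])
mask = ransac_plane(np.vstack([plane, noise]), seed=0)
print(mask[:500].mean(), mask[500:].sum())
```

This illustrates the limitation the paragraph describes: the estimator is robust to scattered noise, but one must still choose the model family and the inlier threshold, which is hard when object size varies or the scene is complex.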

[0051] Existing methods that apply deep learning directly to point cloud segmentation use feature descriptors to extract 3D features from the point cloud data, use machine learning techniques to learn the different object categories, and then use the resulting model to classify the ...

Embodiment 2

[0099] As shown in Figure 4, the structural block diagram of the sparse point cloud segmentation device of this embodiment, the device includes:

[0100] an image data acquisition module 10, configured to obtain target two-dimensional image data captured by a camera and target three-dimensional point cloud data from a laser radar;

[0101] a joint calibration module 20, configured to jointly calibrate the camera and the laser radar and generate calibration parameters;

[0102] a target detection module 30, configured to perform target detection on the target two-dimensional image data to obtain a target detection result, the target detection result including the target category and two-dimensional bounding box position coordinate information;

[0103] a three-dimensional cone point cloud generation module 40, configured to extract the three-dimensional points that can be converted to the target two-dimensional bounding box according to the two-dimensional bounding box position coordinate inform...
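The cone (frustum) extraction step performed by module 40 can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: `frustum_points`, the toy projection matrix, and all parameter values are assumptions. It projects 3-D points through the calibration parameters and keeps those landing inside the 2-D bounding box:

```python
import numpy as np

def frustum_points(points, P, bbox):
    """Keep the 3-D points whose image projection falls inside a 2-D box.

    points : (N, 3) lidar points (here assumed already in the camera frame)
    P      : (3, 4) projection matrix derived from the calibration parameters
    bbox   : (x_min, y_min, x_max, y_max) from the 2-D target detector
    """
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N, 4)
    uvw = homo @ P.T                                        # (N, 3)
    in_front = uvw[:, 2] > 0            # discard points behind the camera
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective division -> pixels
    x0, y0, x1, y1 = bbox
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
             (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return points[in_front & inside]

# toy calibration: focal length 500 px, principal point (320, 240)
P = np.array([[500., 0., 320., 0.],
              [0., 500., 240., 0.],
              [0., 0., 1., 0.]])
pts = np.array([[0., 0., 10.],     # projects to the image centre
                [5., 0., 10.],     # projects far right, outside the box
                [0., 0., -10.]])   # behind the camera
print(frustum_points(pts, P, (300, 220, 340, 260)))
```

The surviving points form the cone point cloud containing the target information; in practice the projection matrix would come from the joint calibration module rather than being hand-written.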

Embodiment 3

[0106] This embodiment also provides a sparse point cloud segmentation device, including:

[0107] at least one processor, and a memory communicatively coupled to the at least one processor;

[0108] wherein the processor is configured to execute the method described in Embodiment 1 by invoking the computer program stored in the memory.

[0109] In addition, the present invention also provides a computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are used to cause a computer to execute the method described in Embodiment 1.

[0110] In the embodiment of the present invention, the target two-dimensional image data captured by the camera and the target three-dimensional point cloud data from the laser radar are acquired, the camera and the laser radar are jointly calibrated to generate calibration parameters, and then the target two-dimensional image data i...


Abstract

The invention discloses a sparse point cloud segmentation method and device in the field of image processing. The method comprises the following steps: acquiring target two-dimensional image data captured by a camera and target three-dimensional point cloud data from a laser radar; performing joint calibration on the camera and the laser radar to generate calibration parameters; performing target detection on the target two-dimensional image data; extracting, according to the target detection result and a selection principle, the three-dimensional points that can be converted to the target two-dimensional bounding box, thereby generating a three-dimensional cone point cloud containing the target information; and finally performing point cloud segmentation to generate the target point cloud. Compared with the prior art, which performs point cloud segmentation by fusing a combination of laser radars, the method reduces equipment cost; the three-dimensional cone point cloud containing the target information is obtained according to the selection principle and then segmented to remove the noise point cloud, which improves the precision and efficiency of point cloud segmentation and gives the method good practical application value.
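The abstract's final step, segmenting the cone point cloud to remove noise points, is not spelled out in this excerpt. One plausible sketch, assuming NumPy and a simple Euclidean-clustering criterion (the function, radius, and toy data are all illustrative assumptions, not the patent's algorithm), keeps the largest cluster and discards isolated noise:

```python
import numpy as np
from collections import deque

def largest_cluster(points, radius=0.3):
    """Euclidean clustering: group points closer than `radius` to a
    neighbour, then keep only the biggest group as the target cloud."""
    n = len(points)
    labels = np.full(n, -1)
    cur = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = cur
        queue = deque([seed])
        while queue:                      # flood-fill the cluster
            i = queue.popleft()
            near = np.where((labels == -1) &
                            (np.linalg.norm(points - points[i], axis=1)
                             < radius))[0]
            labels[near] = cur
            queue.extend(near)
        cur += 1
    counts = np.bincount(labels)
    return points[labels == counts.argmax()]

rng = np.random.default_rng(1)
target = rng.normal([0, 0, 5], 0.05, (200, 3))   # dense object cluster
noise = rng.uniform(-5, 5, (20, 3))              # sparse background points
kept = largest_cluster(np.vstack([target, noise]))
print(len(kept))
```

Because the frustum selection has already discarded points outside the detection box, such a clustering pass only needs to separate the dense target from a few remaining stray points, which is what makes the two-stage pipeline efficient.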

Description

technical field [0001] The invention relates to the field of image processing, and in particular to a sparse point cloud segmentation method and device. Background technique [0002] In recent years, 3D sensor devices have gradually come into widespread use; for example, autonomous navigation systems must continuously detect the position and category of target objects, and 3D point cloud segmentation is a key and essential step in the environmental perception tasks of such systems. Segmentation of the target's 3D point cloud has therefore become a popular research direction. However, in an unknown dynamic environment, accurate point cloud segmentation is difficult because the point cloud data is sparse, unevenly sampled, irregular in format, and lacks color texture. [0003] At present, in order to improve the accuracy of point cloud segmentation, most of the lidars with high be...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00, G06T7/10, G06T7/80
CPC: G06T5/002, G06T7/10, G06T7/80, G06T2207/10028
Inventors: 田劲东, 李育胜, 田勇, 李东, 李晓宇
Owner: SHENZHEN UNIV