Laser radar point cloud multi-target ground object identification method based on deep learning

A ground object recognition and lidar technology, applied in the field of point cloud recognition, which can solve problems such as difficult feature extraction, large amount of calculation and low recognition accuracy, and achieve the effects of reducing the input dimension and reducing the amount of calculation.

Inactive Publication Date: 2019-11-05
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0005] The object of the present invention is to provide a deep learning-based method for identifying multiple target ground objects in laser radar point clouds, which solves the problems of large calculation amount, difficult feature extraction and low recognition accuracy described above.

Method used


Examples


Embodiment 1

[0044] A deep learning-based laser radar point cloud multi-target object recognition method includes the following steps:

[0045] a: Establish a data set: the point cloud scene is sequentially subjected to region segmentation, feature representation and labeling to obtain point cloud data comprising several three-dimensional spaces. The point cloud data is divided into a training set and a test set, as shown in Figure 2.

[0046] a1: Region segmentation: first calculate the minimum value of the X, Y and Z coordinates in the point cloud scene, then subtract the minimum value of each coordinate axis from every coordinate point to translate all coordinate points. Next, divide each coordinate axis in units of 100 meters, splitting the scene into multiple small regions of size 100*100*100; each small region is further subdivided, according to the parameter space size A, into a number of three-dimensional spaces of size A*A*A. In this way, the ent...
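The translation and two-level subdivision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `segment_regions` is hypothetical, the 100 m region size follows the text, and the voxel size `a = 5.0` stands in for the tunable parameter A.

```python
import numpy as np

def segment_regions(points, region_size=100.0, a=5.0):
    """Translate the cloud to the origin, bin points into region_size
    cubes, then subdivide each region into a*a*a voxels. `a` plays the
    role of the parameter space size A from the text."""
    # Subtract the per-axis minima so every coordinate starts at 0.
    shifted = points - points.min(axis=0)
    # Integer index of the 100 m region each point falls into.
    region_idx = np.floor(shifted / region_size).astype(int)
    # Offset of the point inside its region, then its A*A*A voxel index.
    local = shifted - region_idx * region_size
    voxel_idx = np.floor(local / a).astype(int)
    return region_idx, voxel_idx

# Usage: three points, two of which share a region.
pts = np.array([[0.5, 1.0, 2.0], [150.0, 20.0, 3.0], [99.0, 99.0, 99.0]])
regions, voxels = segment_regions(pts)
```

Keeping the region index and the in-region voxel index separate mirrors the two-level subdivision in the text: coarse 100 m regions bound the working set, while the A-sized voxels become the network's input units.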

Embodiment 2

[0071] A deep learning-based laser radar point cloud multi-target object recognition method includes the following steps:

[0072] Create a dataset:

[0073] Region segmentation: find the minimum and maximum values of the x, y and z coordinates in the point cloud data, and subtract the corresponding coordinate minimum from every point so that the point cloud data lies in the range 0 to (max - min), which is convenient for subsequent data processing and improves the efficiency of program operation. Divide all point cloud data, from top to bottom and from left to right, into cubes of 100m*100m*100m; the total number of regions is M*N*K (where M = int(max.x)/100 + 1; N = int(max.y)/100 + 1; K = int(max.z)/100 + 1). Traverse all point cloud data in sequence, dividing by 100 and rounding to assign each point to its corresponding region, then traverse all regions and save the region where the point ...
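A minimal sketch of the M*N*K region-count formula above, assuming the coordinates have already been translated by the per-axis minima and that int() truncates as in the text (the function name `region_counts` is hypothetical):

```python
import numpy as np

def region_counts(points, size=100.0):
    """Compute M, N, K per the text: after translating by the per-axis
    minima, M = int(max.x)/100 + 1, and likewise N, K for y and z."""
    shifted = points - points.min(axis=0)
    mx, my, mz = shifted.max(axis=0)
    m = int(mx) // int(size) + 1
    n = int(my) // int(size) + 1
    k = int(mz) // int(size) + 1
    return m, n, k

pts = np.array([[0.0, 0.0, 0.0], [250.0, 120.0, 30.0]])
m, n, k = region_counts(pts)  # x spans three 100 m slices, y two, z one
```

The "+1" terms ensure that points whose translated coordinate falls short of the next 100 m boundary still get a region, so every point maps into exactly one of the M*N*K cubes.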



Abstract

The invention discloses a deep learning-based laser radar point cloud multi-target ground object identification method, and relates to the field of point cloud identification methods. The method comprises the following steps: sequentially carrying out region segmentation, feature representation and label marking on a point cloud scene to obtain point cloud data comprising a plurality of three-dimensional spaces; establishing a network model comprising an input layer, N convolution layers, a fully connected layer and a Softmax function, inputting the training set of the data set to train the model and obtain an optimal model, and inputting the test set of the data set into the optimal model to obtain an identification result; searching for suspected misclassification points according to depth information, height difference, the spatial relationship of power towers beside power lines, and the relationship between adjacent three-dimensional spaces, and classifying them again to obtain a final identification result. The method solves the problems of large calculation amount, difficult feature extraction and low recognition accuracy that massive, sparse and disordered point clouds cause for existing neural networks.
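The claimed network structure (input layer, N convolution layers, a fully connected layer, a Softmax function) can be sketched as a toy forward pass. This is an illustrative simplification, not the patent's architecture: the 1-D convolution, ReLU activations, layer sizes and the four output classes are all assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, conv_kernels, fc_w, fc_b):
    """Toy forward pass mirroring the claimed layer order:
    input -> N convolution layers -> fully connected layer -> Softmax."""
    h = x
    for kern in conv_kernels:                 # N convolution layers
        h = np.maximum(np.convolve(h, kern, mode="same"), 0.0)  # conv + ReLU
    logits = fc_w @ h + fc_b                  # fully connected layer
    return softmax(logits)                    # class probabilities

rng = np.random.default_rng(0)
x = rng.normal(size=16)                           # flattened voxel features
kernels = [rng.normal(size=3) for _ in range(2)]  # N = 2 conv layers
fc_w = rng.normal(size=(4, 16))                   # 4 ground-object classes
fc_b = np.zeros(4)
probs = forward(x, kernels, fc_w, fc_b)
```

The output is a probability distribution over the target classes, which is what the subsequent misclassification search (depth information, height difference, tower/power-line spatial relationships) would then refine.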

Description

[0001] This research was supported by the Science and Technology Program of Sichuan Province (No. 2019YFS0487). Technical field: [0002] The invention relates to the field of point cloud recognition methods, in particular to a deep learning-based laser radar point cloud multi-target ground object recognition method. Background technique: [0003] Object recognition in 3D point clouds is the process of identifying and extracting artificial and natural objects (roads, houses, trees, power lines, towers, etc.) from point cloud data. With the introduction and development of deep learning methods, features no longer need to be extracted manually; object features are extracted automatically by simulating the working structure of human brain neurons, overcoming shortcomings of traditional methods such as complex feature extraction and insufficient representation. Su and Maji et al. used projection images of 3D shapes from 12 different angles, with each angle learned by a VGG-M convolutional neural network...

Claims


Application Information

IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/24147; G06F18/214
Inventor 邓建华余坤申睿涵孙一鸣周群芳钱璨王云何子远俞泉泉常为弘陈翔罗凌云魏傲寒俞婷肖正欣邓力恺王韬杨远望游长江
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA