
Point cloud feature space representation method for laser SLAM

A point cloud and feature space technology, applied in the fields of instruments, character and pattern recognition, and computer components, which addresses problems such as the complexity of scene recognition, the failure of existing methods to consider local geometric features and semantic context features, and the violation of the quasi-static scene assumption in dynamic environments.

Publication Date: 2020-10-16 (status: Inactive)
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

Compared with cameras, lidar can overcome the influence of light, seasons, weather, and similar factors. However, laser point cloud scene data contains many kinds of objects, including a large number of dynamic elements, and even the same scene observed at different times can differ substantially under varying external factors. This makes scene recognition very complicated. The key to scene recognition is finding a global descriptive feature that completely represents the structural and semantic information of the same type of scene; in particular, a feature representation method is needed for large-scale laser point clouds.
[0004] Traditional point-cloud-based environment recognition algorithms, such as bag-of-words (BoW), Fisher Vectors (FV), and VLAD, usually rely on global, offline, high-resolution maps and need a codebook trained on the map in advance to achieve high-precision positioning. SegMatch, based on deep learning, proposes a scene recognition method based on 3D segment matching: local matching is performed on segmented scene blocks, and geometric consistency checks ensure the reliability of local matching and scene recognition. However, in dynamic scene applications the quasi-static scene assumption does not hold.
PointNetVLAD proposes a new aggregation method that combines PointNet and NetVLAD: the former performs feature learning and the latter performs feature aggregation, so that a global descriptive feature can be learned. The PCAN network, also based on point cloud deep learning, builds on PointNetVLAD by considering the contribution of different local features: a self-attention mechanism is introduced so that the network learns contribution weights for different local features during feature aggregation. However, these methods still do not consider local geometric features, semantic context features, point cloud neighborhood relations, or the distribution of the feature space, and they perform poorly when encoding features for large-scale point cloud scene recognition.
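To make the aggregation step concrete, here is a minimal sketch of attention-weighted feature aggregation in the spirit of PCAN, not the authors' actual network: the module name, layer sizes, and output dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveAggregation(nn.Module):
    """Aggregate per-point local features into one global descriptor,
    weighting each point by a learned contribution score."""
    def __init__(self, feat_dim: int = 1024, out_dim: int = 256):
        super().__init__()
        # small MLP that scores each point's contribution (assumed sizes)
        self.score = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 1))
        self.proj = nn.Linear(feat_dim, out_dim)

    def forward(self, local_feats: torch.Tensor) -> torch.Tensor:
        # local_feats: (B, N, feat_dim) per-point features, e.g. from PointNet
        w = torch.softmax(self.score(local_feats), dim=1)  # (B, N, 1) weights
        global_feat = (w * local_feats).sum(dim=1)         # attention-weighted sum
        return F.normalize(self.proj(global_feat), dim=-1) # unit-length descriptor

feats = torch.randn(2, 4096, 1024)          # dummy batch: 2 scans, 4096 points
print(AttentiveAggregation()(feats).shape)  # torch.Size([2, 256])
```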




Embodiment Construction

[0041] As shown in Figure 1, a point cloud feature space representation method for laser SLAM includes the following steps:

[0042] Step 1: Perform preprocessing, such as filtering and downsampling, on the point cloud samples in the point cloud dataset, and construct training point cloud sample pairs according to the similarity of the point cloud scenes, including positive samples p_pos and negative samples p_neg.
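The patent does not name specific filtering or downsampling algorithms; the sketch below assumes Open3D, with a statistical outlier filter and voxel-grid downsampling chosen as plausible defaults, and a hypothetical scan path supplied by the caller.

```python
import open3d as o3d

def preprocess_scan(cloud_path: str, voxel_size: float = 0.3) -> o3d.geometry.PointCloud:
    """Filter and downsample one raw scan before building sample pairs."""
    pcd = o3d.io.read_point_cloud(cloud_path)
    # statistical outlier removal: drop points far from their neighbors
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # voxel-grid downsampling to a roughly uniform spatial resolution
    return pcd.voxel_down_sample(voxel_size=voxel_size)
```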

[0043] The similarity of a training sample pair is determined with the aid of the samples' position coordinates in the map. Samples within 10 m of each other are regarded as structurally similar positive samples; samples between 10 m and 50 m apart fall into the negative sample selection range, from which negative samples are drawn at random; and randomly selected samples more than 50 m apart are regarded as extremely dissimilar, extremely negative samples.
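A minimal numpy sketch of this selection rule follows, using the 10 m and 50 m thresholds from the paragraph above; the array layout and function name are illustrative assumptions.

```python
import numpy as np

def pick_pair_partners(positions: np.ndarray, idx: int,
                       rng: np.random.Generator = np.random.default_rng()):
    """For anchor sample `idx`, pick a positive (<10 m), a negative (10-50 m),
    and an extremely negative (>50 m) partner from map positions (N, 2)."""
    d = np.linalg.norm(positions - positions[idx], axis=1)
    others = np.arange(len(d)) != idx                 # exclude the anchor itself
    pos = np.flatnonzero(others & (d < 10.0))         # structurally similar
    neg = np.flatnonzero(others & (d >= 10.0) & (d <= 50.0))
    ext = np.flatnonzero(others & (d > 50.0))         # extremely dissimilar
    pick = lambda a: int(rng.choice(a)) if a.size else None
    return pick(pos), pick(neg), pick(ext)
```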

[0044] Specifically: select the Oxford RobotCar dataset, use the p...



Abstract

The invention discloses a point cloud feature space representation method for laser SLAM. The method builds a bidirectional mapping between the point cloud space and a point cloud feature space based on a deep learning network, and achieves loop closure detection and relocalization in laser SLAM as well as compressed storage and transmission of the map in the feature space. Global description feature extraction and compressed reconstruction of large-scale scene point clouds are realized through a neural network with an autoencoder structure. An encoder network is designed to extract global description features that form the feature space of the point cloud; a similarity measure between scenes is given by the distance in the feature space and is used to judge whether two or more scene structures are similar, thereby realizing loop closure detection and relocalization for laser SLAM. The designed decoder network reconstructs the original point cloud from the global description features extracted by the encoder network, realizing compressed storage and low-bandwidth transmission of the point cloud map. The constructed encoder network does not need to be trained in advance on a point cloud map and has strong generalization ability.
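The abstract does not specify the distance metric or threshold used for the similarity measurement; the sketch below assumes Euclidean distance between global descriptors and a hypothetical threshold tau.

```python
import numpy as np

def loop_closure_candidates(f_query: np.ndarray, f_map: np.ndarray,
                            tau: float = 0.3) -> np.ndarray:
    """Flag stored map descriptors (M, D) whose feature-space distance to the
    query descriptor (D,) falls below `tau`, i.e. structurally similar scenes."""
    d = np.linalg.norm(f_map - f_query, axis=1)  # Euclidean distance per scene
    return d < tau                               # boolean mask of candidates
```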

Description

Technical Field

[0001] The invention relates to the technical field of laser point cloud mapping and autonomous navigation, and in particular to a point cloud feature space representation method for laser SLAM.

Background Technique

[0002] A central question is how to make mobile robots better understand and perceive the surrounding environment and achieve flexible, reliable, high-level autonomous navigation. Artificial intelligence technology driven by deep learning has brought rapid progress to mobile robot environment perception and autonomous navigation. In particular, the application of LiDAR allows a robot to perceive the three-dimensional environment directly, but it also brings challenges to point cloud data processing. Simultaneous Localization And Mapping (SLAM) is one of the basic and key technologies for mobile robots to achieve autonomous navigation and positioning: it aims to build a local map while simultaneously determining the position of the robot on the map ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/62
CPC: G06F18/22; G06F18/214
Inventors: 莫凌飞 (Mo Lingfei), 索传哲 (Suo Chuanzhe)
Owner: SOUTHEAST UNIV