
Semantic mapping and positioning method based on priori laser point cloud and depth map fusion

A technology combining laser point clouds with semantic mapping, applied to 3D image processing, image enhancement, and image analysis. It addresses the problems that a single sensor cannot satisfy robustness and real-time requirements, that this limitation prevents the widespread use of localization and mapping technology, and that matching between point-cloud frames often fails. The method achieves automatic initialization and motion recovery, improved mapping and positioning accuracy, and real-time, high-precision operation.

Active Publication Date: 2021-01-22
AEROSPACE INFORMATION RES INST CAS

AI Technical Summary

Problems solved by technology

At present, application scenarios are usually limited to small workspaces; robust, real-time localization of a camera with unknown pose in large indoor and outdoor scenes remains challenging, which prevents the widespread use of localization and mapping techniques. Moreover, because the lighting conditions and texture features of large indoor and outdoor scenes change in complex and dynamic ways, feature points are often lost, and matching between point-cloud frames based on a single RGB video stream or lidar point cloud often fails. In an unknown environment, therefore, a single sensor cannot meet the robustness and real-time requirements of 3D semantic reconstruction and localization for large scenes.

Method used



Examples


Embodiment Construction

[0046] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

[0047] In order to make the above objects, features, and advantages of the present invention more comprehensible, the invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0048] Referring to Figure 1, this embodiment provides a semantic mapping and positioning method based on the fusion of a prior laser point cloud and depth maps, which comprises the following steps: ...
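The detailed steps are elided here, but the abstract's step S2 (generating an RGB-D point cloud from the depth image) is a standard operation. The sketch below shows pinhole back-projection of a depth map into a 3-D point cloud; the function name and the toy intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3-D point cloud
    using the pinhole camera model: x = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# toy 2x2 depth map; principal point at (0.5, 0.5), unit focal length
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
# the zero-depth pixel is discarded, leaving 3 valid points
```

In a real system the intrinsics come from camera calibration, and the resulting cloud would then be registered against the prior laser point cloud (step S2's registration stage).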



Abstract

The invention discloses a semantic mapping and positioning method based on the fusion of a prior laser point cloud and depth maps. The method comprises the following steps: S1, collecting prior laser point cloud data; S2, acquiring a depth image and an RGB image, generating an RGB-D point cloud from the depth image, and initializing and registering the prior laser point cloud with the RGB-D point cloud; S3, correcting the camera pose using constraints provided by the registered prior laser point cloud; S4, creating a three-dimensional geometric point cloud map using a front-and-back window optimization method; S5, performing geometric incremental segmentation on the three-dimensional geometric point cloud map, performing object recognition and semantic segmentation on the RGB image, and fusing the geometric and semantic segmentation results to obtain a semantically enhanced 3D geometric segmentation map; and S6, performing semantic association and updating the segmentation probability allocation for each object to complete construction of the semantic map. The method effectively eliminates the accumulated errors of large-scale indoor mapping and positioning, and achieves high precision and real-time performance.
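Step S6 fuses per-frame semantic predictions into per-segment label probabilities over time. The abstract does not specify the update rule; a common choice, shown here as a hedged sketch, is multiplicative Bayesian fusion of class probabilities followed by renormalization. The function name and the three-class example are illustrative assumptions.

```python
import numpy as np

def update_segment_probs(prior, observation, eps=1e-9):
    """Fuse a segment's prior per-class probabilities with a new frame's
    semantic-segmentation output by elementwise multiplication, then
    renormalize (one plausible form of the 'segmentation probability
    allocation update' in step S6 -- not necessarily the patent's exact rule)."""
    fused = prior * observation
    fused = fused + eps  # guard against an all-zero product
    return fused / fused.sum()

prior = np.array([0.5, 0.3, 0.2])  # e.g. wall / floor / chair
obs   = np.array([0.7, 0.2, 0.1])  # network output for a new frame
post  = update_segment_probs(prior, obs)
# repeated agreeing observations sharpen the distribution toward one class
```

Because the update is multiplicative, frames that agree with the current best label rapidly increase its probability, while a single noisy frame cannot flip a well-established label.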

Description

Technical Field [0001] The invention relates to the technical field of positioning and map construction, and in particular to a semantic mapping and positioning method based on the fusion of prior laser point clouds and depth maps. Background [0002] At present, the main technical difficulties hindering virtual-real fusion for augmented reality in large indoor and outdoor scenes, autonomous driving, and robot navigation and positioning are the dynamic tracking and recording of camera positions and poses in real scenes and the construction of 3D semantic maps. A semantic map assigns corresponding attribute values (for example: ground, wall, or building) to a traditional 3D point cloud map. Existing methods cannot meet the requirements for accuracy and robustness, and the error of inertial attitude sensors gradually increases as it drifts over time. Real-time 3D mapping and camera pose calculation using computer vision and 3D lidar ...
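Camera pose tracking against a point cloud ultimately reduces to estimating a rigid transform between corresponding 3-D points. As a hedged sketch of that core operation (not the patent's specific pose-correction method), the Kabsch/SVD algorithm below recovers the least-squares rotation and translation for known correspondences; one ICP iteration applies it after nearest-neighbour matching.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    for known point correspondences (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# recover a known 90-degree rotation about z plus a translation
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
src = np.random.default_rng(0).random((10, 3))
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(src, dst)
```

In practice the correspondences are noisy and unknown, so this solve is wrapped in an iterative matching loop (ICP or a feature-based variant) against the prior laser point cloud.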

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T15/04; G06T7/80; G06T7/12; G06T7/13
CPC: G06T15/04; G06T7/85; G06T7/12; G06T7/13; G06T2207/10012; G06T2207/10032
Inventor: 李京, 龚建华
Owner: AEROSPACE INFORMATION RES INST CAS