Wheeled robot semantic mapping method and system fusing point cloud and images

A technology that fuses point clouds and images, applied in character and pattern recognition, radio-wave measurement systems, instruments, etc.; it solves the problem of insufficient map information and achieves the effects of low cost, a wide application range, and rich semantic information.

Active Publication Date: 2020-07-28
WUHAN UNIV

AI Technical Summary

Problems solved by technology

[0007] The present invention proposes a wheeled robot semantic mapping method and system that fuses point clouds and images, which is used to solve, or at least partially solve, the technical problem that maps constructed by prior-art methods are not rich enough in information.

Method used



Examples


Embodiment 1

[0049] This embodiment provides a method for semantic mapping of wheeled robots that fuses point clouds and images, and the method includes:

[0050] S1: Use a target-detection convolutional neural network to extract, from the image read by the monocular camera, bounding boxes identifying object positions together with their semantic categories and confidence levels;
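A rough sketch of step S1 follows. The patent does not name the detection network, so a pretrained torchvision Faster R-CNN is used here purely as a stand-in; the function name, score threshold and return format are illustrative assumptions, not taken from the patent.

    # Step S1 (sketch): 2-D object detection on the monocular camera image.
    # Assumption: any detector returning (box, class label, confidence) triples
    # would fit; torchvision's Faster R-CNN is used only as a stand-in.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor

    def detect_objects(image_rgb, score_threshold=0.5):
        """Return a list of (box, label, score) for one frame.
        box = (x_min, y_min, x_max, y_max) in pixel coordinates."""
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        model.eval()
        with torch.no_grad():
            pred = model([to_tensor(image_rgb)])[0]
        detections = []
        for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
            if score >= score_threshold:
                detections.append((box.tolist(), int(label), float(score)))
        return detections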

[0051] S2: Segment the two-dimensional point cloud read by the single-line lidar based on geometric features to obtain point cloud segments;
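A minimal sketch of step S2, assuming the geometric feature used for splitting is a jump in Euclidean distance between consecutive scan points (a common break-point rule for single-line lidar; the thresholds are illustrative):

    # Step S2 (sketch): split one 2-D lidar sweep into point cloud segments
    # at large distance jumps between consecutive points.
    import numpy as np

    def segment_scan(ranges, angles, jump_threshold=0.15, min_points=3):
        """ranges, angles: 1-D arrays of one sweep.
        Returns a list of segments, each an (N, 2) array of x, y points."""
        xs = ranges * np.cos(angles)
        ys = ranges * np.sin(angles)
        points = np.stack([xs, ys], axis=1)

        segments, current = [], [points[0]]
        for prev, curr in zip(points[:-1], points[1:]):
            if np.linalg.norm(curr - prev) > jump_threshold:  # geometric break point
                if len(current) >= min_points:
                    segments.append(np.array(current))
                current = []
            current.append(curr)
        if len(current) >= min_points:
            segments.append(np.array(current))
        return segments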

[0052] S3: Match the bounding boxes with the point cloud segments, and combine each matched point cloud segment with the semantic category and confidence of the corresponding bounding box into a semantic laser;
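A hedged sketch of step S3. The exact matching rule is not spelled out in the text shown here; the sketch assumes a calibrated lidar-camera pair (extrinsics R, t and intrinsic matrix K) and declares a segment matched to a bounding box when most of its projected points fall inside that box. All names and the 0.6 ratio are assumptions.

    # Step S3 (sketch): pair each detection box with a lidar segment and
    # attach (label, confidence) to it, producing the "semantic laser".
    import numpy as np

    def project_to_image(segment_xy, R, t, K):
        """Project 2-D lidar points (z = 0 in the lidar frame) into pixels."""
        pts = np.hstack([segment_xy, np.zeros((len(segment_xy), 1))])  # (N, 3)
        cam = R @ pts.T + t.reshape(3, 1)                              # camera frame
        uv = (K @ cam)[:2] / cam[2]                                    # perspective divide
        return uv.T                                                    # (N, 2) pixels

    def build_semantic_laser(segments, detections, R, t, K, min_ratio=0.6):
        """Attach (label, score) to every segment whose projection lies in a box."""
        semantic_laser = []
        for seg in segments:
            uv = project_to_image(seg, R, t, K)
            for (x0, y0, x1, y1), label, score in detections:
                inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
                          (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
                if inside.mean() >= min_ratio:
                    semantic_laser.append({"points": seg, "label": label, "score": score})
                    break
        return semantic_laser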

[0053] S4: Taking the semantic laser as input, use the laser SLAM algorithm to construct a two-dimensional grid map, adding a semantic structure composed of the cumulative confidence and number of updates of each category to grids that originally contain only an occupancy probability...
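A small sketch of the per-grid semantic structure described in S4: on top of the occupancy probability maintained by the laser SLAM back end, each cell accumulates a cumulative confidence and an update count per category. Field and method names are illustrative, not the patent's.

    # Step S4 (sketch): semantic structure attached to one occupancy-grid cell.
    from collections import defaultdict

    class SemanticCell:
        def __init__(self):
            self.occupancy = 0.5                # occupancy probability from laser SLAM
            self.cum_conf = defaultdict(float)  # category -> cumulative confidence
            self.updates = defaultdict(int)     # category -> number of updates

        def update_semantics(self, label, score):
            """Fuse one semantic-laser hit into this cell."""
            self.cum_conf[label] += score
            self.updates[label] += 1

        def dominant_label(self):
            """Category with the highest cumulative confidence, or None."""
            if not self.cum_conf:
                return None
            return max(self.cum_conf, key=self.cum_conf.get)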

Embodiment 2

[0086] Based on the same inventive concept, this embodiment provides a wheeled robot semantic mapping system that fuses point clouds and images; please refer to Figure 5. The system consists of:

[0087] The semantic extraction module 201 is used to extract, from the image read by the monocular camera and by means of the target-detection convolutional neural network, bounding boxes identifying object positions together with their semantic categories and confidence levels;

[0088] The point cloud segmentation module 202 is used to segment the two-dimensional point cloud read by the single-line lidar based on geometric features to obtain point cloud segments;

[0089] The semantic matching module 203 is used to match the bounding boxes with the point cloud segments, and combine each matched point cloud segment with the semantic category and confidence of the corresponding bounding box into a semantic laser;

[0090] The two-dimensional grid map construction module 204 is used to take the semantic laser as input, use the laser SLAM algorithm to construct a two-dimensional grid map, and add a semantic structure composed of the cumulative confidence and number of updates of each category to grids that originally contain only an occupancy probability...



Abstract

The invention discloses a wheeled robot semantic mapping method and system fusing point clouds and images, solves the problem that a map constructed by laser SLAM only supports robot navigation, constructs a map embedded with semantic annotations, and belongs to the field of mobile robot SLAM. The main content of the method is that semantics are extracted for a two-dimensional point cloud and are stored, updated and optimized in a two-dimensional grid map. The method mainly comprises the steps of deep-learning-based target detection on an image, point cloud segmentation, bounding box and segment matching, map unit semantic updating, and semantic optimization based on SLAM global optimization and clustering. The method has the advantages of real-time operation, simple hardware and rich map information, and is aimed at intelligent navigation and human-machine interaction for indoor mobile robots.
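The abstract's final step, semantic optimization based on SLAM global optimization and clustering, is not detailed in the excerpt above. Purely as an illustrative guess, the sketch below clusters the cells that share a dominant label and discards tiny, isolated clusters as label noise; DBSCAN and all parameters are assumptions, not the patent's method.

    # Sketch of a clustering-based clean-up of per-cell semantics after the
    # SLAM global optimization. Cells of one label are clustered spatially;
    # cells DBSCAN marks as noise (-1) have their semantics discarded.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def prune_semantic_noise(cell_coords, eps=3.0, min_samples=4):
        """cell_coords: (N, 2) grid indices of cells sharing one dominant label.
        Returns a boolean mask of the cells to keep."""
        if len(cell_coords) == 0:
            return np.zeros(0, dtype=bool)
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(np.asarray(cell_coords)).labels_
        return labels != -1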

Description

Technical field

[0001] The invention relates to the technical field of simultaneous localization and mapping, and in particular to a semantic mapping method and system for a wheeled robot that fuses point clouds and images.

Background technique

[0002] With the rapid development of artificial intelligence, earth-shaking changes are taking place in the field of mobile robots, and the intelligentization of robots is accelerating. The approach in which robots rely on SLAM algorithms to build maps usable for navigation is quite mature. When the environment is very complex or changes dynamically, however, robots need the ability to understand the scene and the various objects in it. Semantic maps are the basis of this ability, and their construction is also the foundation of intelligent navigation and human-computer interaction.

[0003] In the prior art, SLAM (Simultaneous Localization And Mapping) technology can be divided into laser SLAM and visual SLAM according to...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06K9/62; G06K9/34; G01S17/89; G01S17/06; G01S17/86; G01C21/00; G06N3/04
CPC: G01S17/89; G01S17/06; G01C21/005; G06V10/267; G06V10/757; G06N3/045; G06F18/23; G06F18/253
Inventor: 张沪寅, 黄凯, 郭迟
Owner: WUHAN UNIV