Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map

A large-scale mapping technology, applied in the field of map creation, which can solve problems such as the inability to guarantee sensor flexibility, and achieves the effects of ensuring comprehensive, high-resolution detection data, reducing information loss, and providing accurate pose estimation.

Pending Publication Date: 2021-12-14
NORTHEAST FORESTRY UNIVERSITY

AI Technical Summary

Problems solved by technology

On the other hand, sensors such as depth or stereo cameras have certain limitations: while they can provide reliable measurements, their flexibility cannot be guaranteed.


Examples


Embodiment 1

[0028] The present invention provides a monocular SLAM method capable of creating a large-scale map, which specifically includes the following steps:

[0029] a. Obtain the upper image information of the space required for mapping through the TOF depth camera, structured light depth camera and obstacle avoidance camera, and obtain the first-level environment image;

[0030] b. Obtain the non-upper image information of the space required for mapping through 3D lidar, and obtain the secondary environment image;

[0031] c. Perform data processing on the first-level environment image, construct the initial environment map and identify the pose of the TOF depth camera, structured light depth camera and obstacle avoidance camera;

[0032] d. Using the inherent pose transformation relationship between the TOF depth camera, structured light depth camera, obstacle avoidance camera and 3D lidar, transform the pose of the TOF depth camera, structured light depth camera, and obstacle avoidance camera to obtain the pose of the 3D lidar;
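Step d above amounts to composing the camera pose estimated from the first-level environment image with a fixed camera-to-lidar extrinsic transform. The following is a minimal sketch of that composition using 4x4 homogeneous matrices in Python; the variable names and numeric values are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical camera pose in the map (world) frame, as identified in step c.
T_world_cam = pose_matrix(np.eye(3), np.array([1.0, 0.5, 0.0]))

# Hypothetical fixed extrinsic calibration: pose of the 3D lidar in the camera frame
# (the "inherent pose transformation relationship" between the sensors).
T_cam_lidar = pose_matrix(np.eye(3), np.array([0.1, 0.0, 0.2]))

# Step d: compose the two transforms to obtain the lidar pose in the map frame.
T_world_lidar = T_world_cam @ T_cam_lidar
```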

Embodiment 2

[0039] The present invention provides a monocular SLAM method capable of creating a large-scale map, which specifically includes the following steps:

[0040] a. Obtain the upper image information of the space required for mapping through the TOF depth camera and obstacle avoidance camera, and obtain the first-level environment image;

[0041] b. Obtain the non-upper image information of the space required for mapping through 3D lidar, and obtain the secondary environment image;

[0042] c. Perform data processing on the first-level environment image, construct the initial environment map and identify the pose of the TOF depth camera and obstacle avoidance camera;

[0043] d. Using the inherent pose transformation relationship between the TOF depth camera, obstacle avoidance camera and 3D lidar, transform the pose of the TOF depth camera and obstacle avoidance camera to obtain the pose of the 3D lidar;

[0044] e. Use the pose of the 3D lidar to transform the secondary environment image ...
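Step e then uses the lidar pose from step d to express the lidar measurements in the map frame so they can be merged with the initial environment map. A minimal sketch, assuming the secondary environment data is available as an N×3 point cloud in the lidar frame; all names and values here are hypothetical.

```python
import numpy as np

def lidar_points_to_map(T_world_lidar, points_lidar):
    """Transform an (N, 3) point cloud from the lidar frame into the map (world) frame."""
    ones = np.ones((points_lidar.shape[0], 1))
    homogeneous = np.hstack([points_lidar, ones])   # (N, 4)
    return (homogeneous @ T_world_lidar.T)[:, :3]   # back to (N, 3)

# Hypothetical lidar pose from step d and a couple of hypothetical lidar returns.
T_world_lidar = np.eye(4)
T_world_lidar[:3, 3] = [1.1, 0.5, 0.2]
points_lidar = np.array([[2.0, 0.0, -0.3],
                         [1.5, 1.0, -0.3]])

points_map = lidar_points_to_map(T_world_lidar, points_lidar)
# points_map can now be fused into the initial environment map built in step c.
```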

Embodiment 3

[0050] The present invention provides a monocular SLAM method capable of creating a large-scale map, which specifically includes the following steps:

[0051] a. Obtain the upper image information of the space required for mapping through the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera, and obtain the first-level environment image;

[0052] b. Obtain the non-upper image information of the space required for mapping through 3D lidar, and obtain the secondary environment image;

[0053] c. Perform data processing on the first-level environment image, construct the initial environment map and identify the pose of the binocular stereo camera, structured light depth camera and obstacle avoidance camera;

[0054] d. Using the inherent pose transformation relationship between the binocular stereo camera, structured light depth camera, obstacle avoidance camera and 3D lidar, transform the pose of the binocular stereo camera, structured light depth camera, and obstacle avoidance camera to obtain the pose of the 3D lidar;



Abstract

The invention discloses a monocular SLAM method capable of creating a large-scale map. The method comprises the steps of obtaining the upper-end image information of the space to be mapped through an image acquisition device, obtaining the non-upper-end image information of the space through a 3D laser radar, performing data processing on the environment images, constructing an initial environment map, and identifying the pose of the image acquisition device. Wide-angle rotary scanning and fixed-point detection are achieved through the 3D laser radar, a large-distance blind area is avoided in the detection process, the comprehensiveness and high resolution of the detection data are guaranteed, the positioning precision of monocular SLAM is optimized, and richer map information is created. Two initial images are obtained by two image acquisition devices, the initial SLAM map is constructed from the mutually matched feature points in the initial images, and after successful initialization, images captured by the image acquisition device are used for monocular SLAM mapping, so that the mapping success rate is improved and information loss in the map is reduced.
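The initialization described in the abstract (two initial images, mutually matched feature points, then an initial SLAM map) corresponds to standard two-view geometry. Below is a minimal OpenCV sketch under that reading; the image file names, intrinsic matrix, and feature-matching parameters are assumptions for illustration and are not specified by the patent.

```python
import cv2
import numpy as np

# Hypothetical inputs: the two initial frames and a pinhole intrinsic matrix.
img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

# Detect and match ORB features between the two initial images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the relative camera pose from the essential matrix of the matched points.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# Triangulate the matched points to obtain the sparse landmarks of the initial SLAM map.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
initial_map = (points_4d[:3] / points_4d[3]).T   # (N, 3) points in the first camera frame
```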

Description

Technical field

[0001] The invention relates to the field of map creation, and in particular to a monocular SLAM method capable of creating large-scale maps.

Background technique

[0002] In recent years, with the further development of computer technology, digital image processing technology and image processing hardware, computer vision has begun to receive widespread attention in the field of robotics. SLAM is the abbreviation of simultaneous localization and mapping, a concept first proposed by Smith, Self and Cheeseman in 1988. It describes the situation in which a robot starts from an unknown location in an unknown environment and then explores that environment: the robot repeatedly observes the environment during its movement, locates its own pose from the environmental features sensed by its sensors, and then incrementally builds the map according to that pose. Real-time monocular SLAM has become an in...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T17/05; G06T7/80; G06T7/55
CPC: G06T17/05; G06T7/85; G06T7/55; G06T2207/10012
Inventors: 段苏洋, 苗芷萱, 姜天奇, 宁馨, 吴雨薇
Owner: NORTHEAST FORESTRY UNIVERSITY