
A visual inertial navigation slam method based on ground plane assumption

A ground-plane-based SLAM technology, applied in image analysis, instruments, and computation, which solves the problems of decreased accuracy and continuity, and achieves the effects of eliminating accumulated error and improving accuracy.

Active Publication Date: 2021-09-03
NORTHEASTERN UNIV LIAONING

AI Technical Summary

Problems solved by technology

However, the accuracy and continuity of traditional purely vision-based SLAM degrade significantly under illumination changes and occlusions.




Embodiment Construction

[0143] The present invention will be further elaborated below in conjunction with the accompanying drawings.

[0144] The present invention proposes a visual-inertial SLAM method (VIGO) based on a ground plane assumption, which adds feature points on the ground and on planar road signs as map features to realize SLAM. To make localization more robust and continuous, and to recover the true scale, the method adds an inertial sensor and incorporates the IMU pre-integration data into the optimization framework. In this way, the estimate of the camera pose is globally constrained, which greatly improves accuracy. In addition, the ground area can be explicitly reconstructed in the 3-D map, providing richer information for subsequent AR or robotics applications. The overall framework is shown in Figure 1.
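The IMU pre-integration mentioned above summarizes all inertial samples between two camera frames into relative motion deltas, so they can be attached to the optimization graph without depending on the global pose. As a minimal illustrative sketch (not the patent's actual implementation; gravity compensation, bias Jacobians, and noise covariance propagation are omitted, and all function names are assumptions):

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    axis = phi / theta
    K = skew(axis)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, gyro_bias, accel_bias):
    """Accumulate IMU samples between two keyframes into relative
    deltas (dR, dv, dp) in the first frame's body coordinates.
    gyro/accel: sequences of 3-vectors sampled at interval dt."""
    dR = np.eye(3)          # relative rotation
    dv = np.zeros(3)        # relative velocity change
    dp = np.zeros(3)        # relative position change
    for w, a in zip(gyro, accel):
        a_corr = a - accel_bias
        # position and velocity use the rotation accumulated so far
        dp += dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv += (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - gyro_bias) * dt)
    return dR, dv, dp
```

In a full pipeline these deltas, together with the gravity vector, form the IMU residual between consecutive keyframe states in the factor graph.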

[0145] The present invention is based on the visual inertial navigation SLAM method of ground plane hypothesis, compr...



Abstract

The present invention relates to a visual-inertial SLAM method based on a ground plane assumption. The method extracts feature points from images and performs IMU pre-integration; establishes a camera projection model and calibrates the camera intrinsics and the extrinsics between the IMU and the camera; initializes the system by aligning the visually observed point cloud and camera poses to the IMU pre-integration, recovering the ground equation and the camera poses; initializes the ground to obtain the ground equation, determines the ground equation under the current camera pose, and back-projects it into the image coordinate system to obtain a more accurate ground area; and, based on state estimation, derives the observation model of each sensor, fuses camera observations, IMU observations, and ground-feature observations, uses a graph-optimization model for state estimation, and applies sparse graph optimization with gradient descent to carry out the overall optimization. Compared with previous algorithms, the present invention achieves a large improvement in accuracy and constrains the estimate of the camera pose globally, so that accuracy is greatly improved.
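The back-projection step described in the abstract relies on a standard geometric fact: given the ground plane expressed in the camera frame as n·X + d = 0 and the pinhole intrinsics K, every pixel's viewing ray intersects the plane at a closed-form depth. A minimal sketch under those assumptions (function names and the camera-frame plane parameterization are illustrative, not taken from the patent):

```python
import numpy as np

def ground_depth(u, v, K, n_c, d_c):
    """Depth along the ray of pixel (u, v) to the plane n_c . X + d_c = 0,
    with n_c, d_c expressed in the camera frame.
    Returns None if the ray is (near-)parallel to the plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized viewing ray
    denom = n_c @ ray
    if abs(denom) < 1e-9:
        return None
    # n_c . (z * ray) + d_c = 0  =>  z = -d_c / (n_c . ray)
    return -d_c / denom

def backproject_to_ground(u, v, K, n_c, d_c):
    """3-D point on the ground plane observed by pixel (u, v)."""
    z = ground_depth(u, v, K, n_c, d_c)
    if z is None or z <= 0:
        return None  # pixel does not see the plane in front of the camera
    return z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
```

Running this test over the candidate ground region of the image yields the 3-D ground points that the method feeds into the state estimator as ground-feature observations.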

Description

Technical field

[0001] The invention relates to positioning and mapping technology, in particular to a visual-inertial SLAM method based on a ground plane assumption.

Background technique

[0002] SLAM stands for Simultaneous Localization and Mapping. The camera collects images in real time, estimates the camera's motion trajectory from frame-by-frame images, and reconstructs a map of the scene the camera moves through. Traditional visual SLAM uses points and lines with obvious color changes in the scene as map landmarks; these have no practical meaning and no contextual semantics, and are seriously affected by lighting and pedestrian occlusion in shopping-mall environments. To allow robots to move freely in indoor and outdoor environments, and to integrate AR applications into a scene more realistically, SLAM has become a research hotspot in recent years, and the monocular camera has small size, low cost, and can be easily em...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/73; G06T7/80
CPC: G06T7/73; G06T7/80
Inventors: 于瑞云, 杨硕, 石佳
Owner: NORTHEASTERN UNIV LIAONING