
Visual inertial positioning method based on point-line feature fusion

A feature fusion and inertial positioning technology, applied in image analysis, image enhancement, instruments, etc., which addresses problems such as defects in real-time pose estimation, failure to eliminate cumulative error, and insufficient scene feature extraction, achieving the effect of improved feature detection efficiency and accuracy.

Pending Publication Date: 2022-05-03
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Wei et al. proposed a positioning method based on point-line feature fusion that uses the FAST corner detector and the FLD line detector in the point and line feature extraction stages, respectively. This scheme outperforms other schemes in tests without loop closure detection, but accumulated error over longer distances remains a defect.
PL-SLAM builds on ORB-SLAM by adding LSD-LBD line features; however, unlike other mainstream solutions, it does not incorporate an IMU module, which affects the accuracy of pose estimation.
PL-VIO, based on VINS-Mono, optimizes the state by minimizing a cost function, but this solution does not meet real-time requirements, leaving defects in real-time pose estimation.
[0003] Actual autonomous driving often involves scenes with weak texture and motion blur. To effectively guarantee self-positioning accuracy in such complex scenes, existing visual positioning methods based on point-line feature fusion still fall short: in weak-texture environments scene features are insufficiently extracted, most schemes cannot meet practical real-time requirements, and some even lack a loop closure detection strategy and therefore cannot eliminate accumulated error.
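For illustration, the FAST segment test referenced above classifies a pixel as a corner when a long contiguous arc of the 16 pixels on a radius-3 Bresenham circle around it is uniformly brighter or darker than the centre by a threshold. The following is a minimal, unoptimized numpy sketch of that test only; the threshold and arc length are assumed defaults, not values from the patent:

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 around the candidate pixel
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def fast_corners(img, threshold=20, arc=9):
    """Return (row, col) positions passing a simplified FAST-9 segment test:
    at least `arc` contiguous circle pixels all brighter or all darker than
    the centre pixel by `threshold`."""
    h, w = img.shape
    img = img.astype(np.int32)
    corners = []
    for r in range(3, h - 3):
        for c in range(3, w - 3):
            center = img[r, c]
            ring = np.array([img[r + dr, c + dc] for dr, dc in CIRCLE])
            brighter = ring > center + threshold
            darker = ring < center - threshold
            # duplicate the ring so a contiguous run may wrap around
            for mask in (brighter, darker):
                run, best = 0, 0
                for v in np.concatenate([mask, mask]):
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= arc:
                    corners.append((r, c))
                    break
    return corners
```

Production detectors add a high-speed pre-test and non-maximum suppression; the patent's optimized extraction pipeline is not reproduced here.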



Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0027] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0028] To improve self-positioning accuracy in weak-texture scenes while ensuring the real-time performance of the system, the present invention uses a monocular camera and an IMU as the hardware basis, optimizes the point feature and line feature extraction algorithms, uses optical flow tracking to obtain correspondences between features, and combines these with IMU information to guarantee the premise of pose estimation; at the back end, joint optimization over multiple sources of information achieves accurate pose estimation. The present invention is a visual inertial positioning method based on point-line feature fusion; the process, shown in figure 1, specifically includes the following steps:
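The optical flow tracking mentioned above is, in typical visual-inertial systems, a pyramidal Lucas-Kanade tracker. Below is a single-window, single-level numpy sketch of the underlying least-squares step for one feature; it is an illustration of the general technique, not the patent's implementation, and the window size is an assumed default:

```python
import numpy as np

def lk_flow(prev, curr, point, win=7):
    """Estimate the (dx, dy) displacement of one feature with one
    single-level Lucas-Kanade step: solve the least-squares system built
    from spatial gradients and the temporal difference in a window."""
    r, c = point
    half = win // 2
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    # central-difference spatial gradients of the previous frame
    Iy, Ix = np.gradient(prev)
    sl = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
    ix, iy = Ix[sl].ravel(), Iy[sl].ravel()
    it = (curr[sl] - prev[sl]).ravel()       # temporal difference
    A = np.stack([ix, iy], axis=1)           # brightness-constancy system
    g, _, _, _ = np.linalg.lstsq(A, -it, rcond=None)
    return g                                 # (dx, dy) in column/row direction
```

A full tracker iterates this step over image pyramids and discards features whose gradient matrix is ill-conditioned.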

[0029] Step 1, first initialize the system to obtain a better initial value, so as to provide a stable initial...


PUM

No PUM

Abstract

The invention discloses a visual inertial positioning method based on point-line feature fusion. The method specifically comprises the following steps: step 1, carrying out system initialization and obtaining an initial state; step 2, performing optimization estimation on the initial state obtained in step 1 using a sliding window estimation method; and step 3, carrying out closed-loop detection on the result of the optimization estimation in step 2. According to the method, the weak texture scene is constrained through line features, improving the recognizability of scene features; combined with the tightly coupled IMU scheme, the vehicle is positioned more accurately, providing a more effective guarantee for subsequent tasks such as trajectory prediction.
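The tightly coupled IMU scheme mentioned in the abstract relies on propagating position and velocity from inertial measurements between camera frames; full systems do this as IMU preintegration with rotation and bias terms. The following is a deliberately simplified sketch of that propagation, assuming the body frame stays aligned with the world frame and ignoring biases and noise; it illustrates the idea only and is not the patent's formulation:

```python
import numpy as np

def integrate_imu(p0, v0, accels, dt, g=np.array([0.0, 0.0, -9.81])):
    """Propagate position and velocity between two camera frames from
    accelerometer samples. An accelerometer measures specific force, so
    gravity is added back to recover the true acceleration. Rotation and
    bias terms, present in real preintegration, are omitted."""
    p = np.asarray(p0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    for a_meas in accels:
        a = np.asarray(a_meas, dtype=float) + g  # specific force -> acceleration
        p = p + v * dt + 0.5 * a * dt * dt       # exact for piecewise-constant a
        v = v + a * dt
    return p, v
```

In a sliding-window estimator, such preintegrated deltas form the inertial residuals that are jointly minimized with the point and line reprojection residuals.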

Description

technical field [0001] The invention belongs to the technical fields of visual SLAM and intelligent driving, and relates to a visual inertial positioning method based on point-line feature fusion. Background technique [0002] Vision-based positioning methods are mainly divided into traditional geometric methods and learning-based methods; in both performance and robustness, traditional methods are generally superior to learning-based methods. In recent years, multi-feature fusion has also appeared. For example, PL-VINS is the first real-time monocular inertial positioning method based on point and line features; it builds on VINS-Mono, introduces LSD line features, and uses line feature constraints for back-end nonlinear pose optimization. Wei et al. proposed a new positioning method based on point-line feature fusion, using the FAST corner point and FLD line feature detection algorithms in the point feature an...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Application (China)
IPC(8): G06T7/73; G06T7/246; G06T7/269; G06T7/13; G01C21/16
CPC: G06T7/73; G06T7/246; G06T7/269; G06T7/13; G01C21/1656; G06T2207/10016; G06T2207/20164; G06T2207/30252
Inventor: Ma Kewei, Zhang Qieshi, Cheng Jun, Ren Ziliang, Kang Yuhang, Ma Ning
Owner SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI