
RANSAC algorithm-based visual localization method

A visual positioning technology, applied in the fields of photo interpretation, navigation, and computation tools. It addresses the traditional RANSAC algorithm's problems of many iterations, slow positioning speed, and long computation time, with the effect of reducing the number of iterations and improving positioning and computation speed.

Active Publication Date: 2015-05-06
Assignee: 严格集团股份有限公司


Problems solved by technology

[0010] The purpose of the present invention is to solve the problems of the traditional RANSAC algorithm, namely its large number of iterations, large amount of calculation, and long computation time, which make visual positioning methods based on it slow, and to propose a visual positioning method based on an improved RANSAC algorithm.

Method used


Examples


Specific Embodiment 1

[0024] The visual positioning method based on the RANSAC algorithm of this embodiment, as shown in Figure 6, is achieved through the following steps:

[0026] Step 1: Use the SURF algorithm to calculate the feature points, and the feature point descriptors, of the image uploaded by the user to be located;

[0027] Step 2: Select the picture in the database with the most matching points, and perform SURF matching between the feature point descriptors of the image obtained in Step 1 and those of the picture. Each matched image–picture pair is defined as a pair of matching images, and each pair of matching images yields a set of matching points after matching;

[0028] Step 3: Using the matching-quality-based RANSAC algorithm, remove the erroneous matching points from the matching points of each pair of matching images in Step 2, then determine the 4 pairs of matching images that contain the largest...
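The outlier-rejection idea in Step 3 can be sketched in Python. This is a minimal, illustrative sketch rather than the patent's implementation: it assumes putative matches are already available as coordinate pairs and uses a simple 2D-translation motion model inside classic RANSAC (the patent's matching-quality-weighted variant and the subsequent epipolar-geometry step are not reproduced here).

```python
import math
import random

def ransac_translation(matches, threshold=2.0, iterations=100, seed=0):
    """Classic RANSAC over putative matches [(x1, y1, x2, y2), ...].
    Model: a pure 2D translation (dx, dy), estimated from one sampled match.
    Returns the largest consistent inlier set found."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        x1, y1, x2, y2 = rng.choice(matches)   # minimal sample: 1 match
        dx, dy = x2 - x1, y2 - y1              # hypothesized translation
        inliers = [m for m in matches
                   if math.hypot(m[2] - m[0] - dx, m[3] - m[1] - dy) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Synthetic data: 8 correct matches shifted by (10, 5), plus 2 gross outliers.
good = [(x, y, x + 10, y + 5)
        for x, y in [(0, 0), (1, 2), (3, 1), (4, 4), (5, 0), (6, 3), (7, 7), (8, 2)]]
bad = [(0, 0, 50, 50), (2, 2, -30, 9)]
inliers = ransac_translation(good + bad)
print(len(inliers))  # 8 — the consistent matches survive, the 2 outliers are removed
```

The patent's improvement targets exactly the loop above: by biasing sampling toward high-quality matches, a consistent model is found in fewer iterations than uniform random sampling requires.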

Specific Embodiment 2

[0030] Differing from Embodiment 1, the visual positioning method of this embodiment embodies the improved RANSAC algorithm. The process of eliminating the erroneous matching points from the matching points of each pair of matching images in Step 2 by the improved, matching-quality-based RANSAC algorithm is as follows:

[0031] Step 3-1: Suppose the image uploaded by the user, as shown in Figure 1, has n1 feature points, and the database picture, as shown in Figure 2, has n2 feature points. Select one feature point from the n1 feature points of the image and compute its Euclidean distance to each of the n2 feature points of the picture using the Euclidean distance formula (i = 1, 2, ..., n1), yielding n2 Euclidean distances, one per feature point in the picture. Then, from these n2 distances, extract the minimum Euclidean distance d and the second-smallest Euclidean distance, and calculate the ratio...

Embodiment 1

[0040] An implementation was carried out according to Specific Embodiment 2. Figure 1 is a schematic diagram of the feature points of the image uploaded by the user, containing 130 feature points; Figure 2 is a schematic diagram of the feature points of the database image, containing 109 feature points; there are 90 pairs of matching points between them. As shown in Figure 4, matching with the improved RANSAC algorithm of the present method yields 82 pairs of correct matching points, removes 8 pairs of erroneous matching points, and requires 1 iteration. The improved RANSAC algorithm takes 0.203 seconds, while the original RANSAC algorithm takes 0.313 seconds and requires 2 iterations.

[0041] The matching points obtained without using the improved RANSAC algorithm are shown in Figure 3.



Abstract

The invention discloses an RANSAC algorithm-based visual localization method, which belongs to the field of visual localization. The traditional RANSAC algorithm requires many iterations, a large amount of calculation, and a long computation time, so visual localization methods implemented with it suffer from low localization speed. The RANSAC algorithm-based visual localization method comprises the following steps: calculating the feature points and feature point descriptors of the image uploaded by a user to be localized with the SURF algorithm; selecting the picture with the most matching points from a database, performing SURF matching between the obtained feature point descriptors of the image and those of the picture, defining each matched image–picture pair as one pair of matching images, and obtaining a group of matching points after matching each pair; eliminating erroneous matching points from the matching points of each pair of matching images with the matching-quality-based RANSAC algorithm, and determining the four pairs of matching images with the most correct matching points; and calculating the position coordinates of the user with an epipolar geometry algorithm based on the obtained four pairs of matching images, thereby completing the indoor localization.

Description

Technical Field

[0001] The invention relates to a visual positioning method based on the RANSAC algorithm.

Background

[0002] With the advancement of science and technology and the improvement of living standards, mobile phones have become standard travel equipment, and location-based services have attracted increasing attention. Among existing positioning technologies, satellite positioning is widely used outdoors and offers high accuracy, but in indoor environments its performance is unsatisfactory due to the influence of walls and other factors. In recent years, Wi-Fi-based positioning technology has had a large impact on indoor positioning, because Wi-Fi equipment is easy to deploy and implement; however, Wi-Fi-based positioning is strongly affected by the environment, and factors such as large-scale equipment can degrade positioning accuracy. Vision-based positioning technology ...

Claims


Application Information

IPC (8): G01C21/20
CPC: G01C11/08
Inventors: 马琳, 万柯, 谭学治, 何晨光
Owner: 严格集团股份有限公司