Mobile robot indoor autonomous localization method combining scene point-line features

A mobile robot autonomous positioning technology, applied in the fields of instruments, image data processing, and computing. It solves the problems of black holes and inaccurate depth values in depth images, and achieves the effects of simple and inexpensive positioning equipment, recovery from positioning failure, and strong robustness.

Publication Date: 2019-03-19 (Inactive)
SHANGHAI UNIV


Problems solved by technology

Adaptively extract ORB (Oriented FAST and Rotated BRIEF) feature points of the scene from the color image, and apply bilateral filtering to the depth image to solve the problems of black holes and inaccurate depth values in the depth image.



Examples


Embodiment 1

[0034] Embodiment 1: referring to Figure 1 to Figure 4, the specific operating steps of this indoor autonomous positioning method for a mobile robot combining scene point-line features are as follows:

[0035] 1) Equipment installation and data collection: fix the depth camera sensor on top of the mobile robot, install universal wheels that can move in any direction at the bottom of the robot, and place a computer inside the robot to process the environmental data acquired by the depth camera;

[0036] 2) Point and line feature extraction and matching for the scene in which the robot is located: the depth camera acquires a color image and a depth image of the scene, the collected images are denoised in preprocessing, and the point features and line features of the color image are extracted and matched;

[0037] 3) Calculation of feature depth information and handling of missing depth: the depth camera computes depth information, and the method resolves the problem of missing depth ...
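The paragraph above is truncated, but as a hedged illustration of what computing feature depth with an RGB-D camera typically involves, the sketch below back-projects a feature pixel and its depth reading to a 3-D point with the pinhole model. The intrinsic parameters are placeholders, not calibration values from the patent.

```python
import numpy as np

# Placeholder pinhole intrinsics (fx, fy, cx, cy); the real values come
# from the depth camera's calibration, which the patent does not list.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def back_project(u, v, depth_mm):
    """Lift pixel (u, v) with a depth reading in millimeters to a 3-D
    point in the camera frame (meters) using the pinhole model:
    Z = d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    if depth_mm == 0:
        # A zero reading is a "black hole": no valid depth. Such features
        # are handled separately (discarded or recovered), per step 3.
        return None
    z = depth_mm / 1000.0
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])
```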

Embodiment 2

[0040] Embodiment 2: This embodiment is basically the same as Embodiment 1; its distinguishing features are:

[0041] In step 1), the depth camera and the universal-wheel mobile chassis communicate with the computer through the robot operating system (ROS) interface, and all data calculation and processing are carried out in the ROS system. The depth images collected by the depth camera are denoised in preprocessing with a bilateral filtering algorithm.
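Bilateral filtering is a standard OpenCV operation; the snippet below is a minimal sketch of this denoising step, assuming a Kinect-style 16-bit depth frame. The filter parameter values are illustrative assumptions, since the patent does not specify them.

```python
import cv2
import numpy as np

def denoise_depth(depth_mm: np.ndarray) -> np.ndarray:
    """Bilateral-filter a raw depth frame (uint16, millimeters).

    The range term (sigmaColor) smooths measurement noise while
    preserving sharp depth discontinuities at object edges.
    """
    # OpenCV's bilateral filter accepts 8-bit or 32-bit float images,
    # so convert the 16-bit depth map to float32 first.
    depth_f = depth_mm.astype(np.float32)
    filtered = cv2.bilateralFilter(depth_f, d=5, sigmaColor=30.0, sigmaSpace=5.0)
    # Zero pixels are "black holes" (no depth reading); keep them masked
    # so downstream steps treat them as missing rather than as depth 0.
    filtered[depth_mm == 0] = 0.0
    return filtered
```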

[0042] In step 2), the extracted point features are ORB features, described by BRIEF descriptors, and feature points are matched with the K-nearest-neighbor (KNN) algorithm on their binary descriptors. The extracted line features are LSD features, described by LBD descriptors, and the appearance and geometric constraints of the line features are used to perform effective line matching; the ORB feature extraction algorithm uses an adaptiv...
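As a hedged sketch of this pipeline: ORB detection and Hamming-distance KNN matching are standard OpenCV calls, and an LSD detector also ships with OpenCV (though it is absent from some 4.x builds); LBD description and appearance/geometry-based line matching live in opencv-contrib's line_descriptor module and are only indicated here. The feature count and ratio-test threshold are assumptions, not values from the patent.

```python
import cv2

def match_point_features(gray1, gray2, n_features=1000, ratio=0.75):
    """ORB keypoints with binary (rBRIEF) descriptors, matched by KNN.

    cv2.BFMatcher with the Hamming norm performs the k-nearest-neighbor
    search on the binary descriptors; Lowe's ratio test (threshold is an
    assumption, not from the patent) rejects ambiguous matches.
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good

def detect_line_segments(gray):
    """LSD line segments. LBD description and line matching would follow
    via opencv-contrib's line_descriptor module, omitted here."""
    lsd = cv2.createLineSegmentDetector()  # missing from some OpenCV 4.x builds
    lines, _widths, _precisions, _nfa = lsd.detect(gray)
    return lines
```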

Embodiment 3

[0046] Embodiment 3: The indoor autonomous positioning method for a mobile robot combining scene point-line features proceeds as follows:

[0047] As shown in Figure 2, the depth camera (RGB-D) is mounted on the mobile robot platform. In this specific embodiment the depth camera can be Microsoft's inexpensive Kinect v2, which offers a 1920 × 1080 color image resolution, a 512 × 424 depth image resolution, a forward-looking horizontal field of view of 70 degrees, and a vertical field of view of 60 degrees, which meets the requirements of the positioning method of the present invention. The mobile platform adopts the Kobuki mobile chassis, which has a 3-axis digital gyroscope with a measurement range of ±250 degrees/second; this chassis meets the requirements of autonomous positioning and subsequent navigation. The collection of sensor data, data processing and calculation, and background data optimization are all completed b...
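The abstract below notes that the matched features feed a visual odometry calculation to recover the robot's position. As a minimal sketch of one standard RGB-D formulation (assumed here; the patent's exact solver and its line-feature residuals are not shown), the snippet estimates camera motion from 3-D points back-projected in the previous frame and their 2-D matches in the current frame using RANSAC-PnP.

```python
import cv2
import numpy as np

def estimate_motion(pts3d_prev, pts2d_curr, K):
    """Frame-to-frame camera motion from 3D-2D correspondences.

    pts3d_prev: (N, 3) points back-projected from the previous frame.
    pts2d_curr: (N, 2) matched pixel locations in the current frame.
    K:          3x3 camera intrinsic matrix.
    RANSAC-PnP rejects outlier matches and returns the rotation matrix
    and translation vector of the camera motion.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d_prev, dtype=np.float32),
        np.asarray(pts2d_curr, dtype=np.float32),
        K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```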



Abstract

The invention relates to an indoor autonomous positioning method for a mobile robot that combines the point and line features of a scene. The method has strong scene adaptability: it uses a depth camera mounted on the mobile robot to collect scene video data and extracts the point and line features of each frame to complete the subsequent positioning calculation. In general, point features perform well in scenes with clear texture, rich feature points, and no occlusion. In scenes with few texture feature points, however, feature points become scarce and a localization method based on feature points alone is not robust. The invention extracts the more stable line features of objects in texture-deficient scenes, thereby providing richer feature information for the visual odometry calculation, obtaining the position information of the mobile robot and achieving autonomous positioning.

Description

technical field [0001] The invention belongs to the field of autonomous navigation of mobile robots, and relates to an indoor autonomous positioning method for a mobile robot combining scene point and line features, which uses a depth camera installed on the mobile robot to perform indoor autonomous positioning and pose acquisition. The method makes full use of indoor scene data by extracting both the point and line features of the scene. Point features perform excellently in scenes with clear textures, rich feature points, and no occlusion, but in scenes with few texture feature points a pure feature-point method is less robust. The method therefore extracts the more stable line features of objects in texture-deficient scenes, ensuring richer feature information for the depth-camera-based visual odometry calculation and thereby obtaining the position information of the mobile robot. Backgrou...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/73, G06T5/00, G06T7/33
CPC: G06T5/002, G06T2207/20028, G06T7/33, G06T7/73
Inventor: 田应仲, 李昂松, 李龙
Owner: SHANGHAI UNIV