
Robot image localization method and system based on deep learning

A deep-learning-based image positioning technology, applied in image analysis, image enhancement, and image data processing. It addresses the problem of positioning failure and achieves improved accuracy in feature extraction and matching.

Active Publication Date: 2019-02-26
SHENZHEN QIANHAI YYD ROBOT CO LTD

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a deep-learning-based robot image positioning method and system that solve the problem of positioning failure when the robot cannot capture feature information indoors.




Embodiment Construction

[0037] The technical solutions of the present invention are further described below with reference to the embodiments and the accompanying drawings. Figure 1 shows the flow chart of the deep-learning-based robot image localization method of the present invention, which mainly comprises the following steps:

[0038] A. Carry out deep training and learning to obtain an image model and matching rules;

[0039] B. The robot collects environmental images at successive orientations and performs feature extraction and feature matching on the collected images according to the obtained image model and matching rules, yielding the three-dimensional coordinates of all matching points and the common key-point information of at least two images;

[0040] C. Use a random sample consensus (RANSAC) algorithm to eliminate the mismatched points among the matching points in the above images, then obtain the three-dimensional coordinates of at least two key points from the images...
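Step C's outlier rejection can be illustrated with a minimal RANSAC loop. This is a hedged sketch, not the patented implementation: the patent does not specify the geometric model, so for simplicity the relation between matched point sets is modeled here as a pure 3D translation.

```python
import numpy as np

def ransac_filter_matches(src, dst, iters=200, threshold=0.5, seed=0):
    """Reject mismatched 3D point pairs with a RANSAC-style consensus loop.

    Illustrative sketch: models the src -> dst relation as a pure 3D
    translation (a simplification; the patent does not state the model).
    src, dst: (N, 3) arrays of matched 3D points. Returns an inlier mask.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_mask = np.zeros(n, dtype=bool)
    for _ in range(iters):
        i = rng.integers(n)                    # minimal sample: one pair
        t = dst[i] - src[i]                    # candidate translation
        err = np.linalg.norm((src + t) - dst, axis=1)
        mask = err < threshold                 # consensus set for this model
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Matches consistent with the best candidate translation are kept; pairs whose residual exceeds the threshold are discarded as mismatches.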



Abstract

The invention relates to a robot image positioning method based on deep learning. The method comprises at least the following steps: A) deep training and learning are carried out to obtain an image model and matching rules; B) a robot collects environment images at successive orientations, and feature extraction and matching are carried out on the collected images according to the obtained image model and matching rules, yielding the 3D coordinates of all matching points and the common key-point information of at least two images; C) mismatched points are rejected from the matching points of the images via a random sample consensus algorithm, and the 3D coordinates of at least two key points are obtained from the images; D) the 3D coordinates of the key points are ordered using a bubble sort; and E) the rotation matrix and offset vector between every two orientations are obtained via an absolute orientation algorithm, and the moving track of the robot is positioned by successive iteration. This solves the problem of positioning failure caused by the robot being unable to capture feature information indoors. The invention also provides a deep-learning-based robot image positioning system using this positioning method.
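Step E recovers a rotation matrix R and offset vector t between two orientations from corresponding 3D key points. The abstract names an "absolute orientation algorithm" without giving details; the sketch below uses one standard closed-form realization (the SVD-based Kabsch/Umeyama solution), which should be read as an illustration rather than the patented method.

```python
import numpy as np

def absolute_orientation(p, q):
    """Find rotation R and offset t with q ≈ R @ p + t (least squares).

    Hedged sketch of step E using the SVD-based closed form; p, q are
    (N, 3) arrays of corresponding 3D key points, N >= 3, non-collinear.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)      # centroids
    H = (p - pc).T @ (q - qc)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc                              # offset from rotated centroid
    return R, t
```

Applying this pairwise to consecutive orientations and composing the resulting (R, t) pairs yields the robot's moving track by successive iteration, as the abstract describes.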

Description

Technical Field

[0001] The present invention relates to the technical field of image positioning, and in particular to a deep-learning-based robot image positioning method and positioning system.

Background Technique

[0002] In vision-based robot navigation and positioning, the extraction and matching of feature information is the primary key technology, and the quality of feature extraction directly affects the accuracy of robot positioning. Commonly used image features include color features, texture features, shape features, and spatial-relationship features. Common methods for extracting color features include the color histogram, color sets, color moments, color coherence vectors, and color correlograms. Common methods for extracting texture features include the gray-level co-occurrence matrix, geometric methods, model-based methods, and signal-processing methods. Commonly used methods for extracting shape features are the boundary feature method, the Fourier shape description me...
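Of the feature types listed in the background, the color histogram is the simplest to make concrete. The following is a small illustrative sketch (not taken from the patent's own method) that computes a normalized per-channel histogram as a global image feature.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel color histogram as a simple global image feature.

    Illustration of one background feature (the color histogram).
    image: (H, W, 3) uint8 array. Returns a normalized (3*bins,) vector.
    """
    feats = []
    for c in range(3):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist)
    v = np.concatenate(feats).astype(float)
    return v / v.sum()  # normalize so images of different sizes compare
```

Two such vectors can then be compared (e.g. by Euclidean or chi-squared distance) to judge color similarity between images, though as the background implies, richer features are needed for reliable positioning.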


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/73
CPC: G06T2207/10016; G06T2207/20081; G06T2207/30241
Inventors: Li Mingming (李明明), Wu Longfei (吴龙飞)
Owner SHENZHEN QIANHAI YYD ROBOT CO LTD