Robot autonomous localization and navigation based on image-text recognition and semantic meaning

A robot and image-text technology, applied in navigation, surveying, and character and pattern recognition, which solves the problems that the robot's position and trajectory drift and that autonomous localization and navigation cannot be supported, achieving high accuracy, promoting human-machine communication, and improving positioning accuracy.

Active Publication Date: 2018-04-27
Applicant: 南京万云信息技术有限公司


Problems solved by technology

[0015] In vision- or lidar-based SLAM systems, accumulated sensor error and accumulated error in visual image matching cause the robot's estimated position and trajectory to drift, so that after the robot has moved for only a short time or over a small area, the error can no longer support autonomous localization and navigation.
In SLAM, loop-closure detection is provided to overcome invalid positioning caused by the complexity of the environment and the accumulation of sensor error. However, a robot does not necessarily follow a closed route, and it may not be able to maintain valid positioning until an effective loop closure is detected (that is, the estimate has already "flown away" before loop detection).
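To make the drift problem concrete, the following is a minimal simulation sketch (an illustration, not part of the patent) of 2-D dead reckoning with noisy odometry; the step size and noise level are arbitrary assumptions.

```python
# Illustrative sketch: position error of noisy dead reckoning accumulates
# without bound unless an absolute observation or loop closure corrects it.
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.zeros(2)
est_pos = np.zeros(2)
step = np.array([0.1, 0.0])   # assumed commanded motion per step, metres
odom_sigma = 0.005            # assumed per-step odometry noise, metres

errors = []
for t in range(2000):
    true_pos = true_pos + step
    # Each step adds independent noise to the estimate, so error accumulates.
    est_pos = est_pos + step + rng.normal(0.0, odom_sigma, size=2)
    errors.append(np.linalg.norm(est_pos - true_pos))

# Error grows roughly like sigma * sqrt(t); after enough steps the estimate
# can no longer support navigation unless a loop closure or an absolute
# observation (e.g. a recognized sign at a known map position) corrects it.
print(f"error after  100 steps: {errors[99]:.3f} m")
print(f"error after 2000 steps: {errors[-1]:.3f} m")
```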




Detailed Description of the Embodiments

[0054] The present invention will be further described below in conjunction with embodiments.

[0055] The system designed by the invention is divided into an offline part and an online part.

[0056] Offline part:

[0057] The offline part collects various signs, especially indoor direction signs and warning signs, which are manually labeled and stored in a database. Through a machine learning scheme, the algorithm learns the graphic and image features of these signs and their semantic information. It includes the following steps (a training sketch follows the list):

[0058] 1) Collection: signs can be collected from the network or manually on site.

[0059] 2) Labeling: manually label the attributes and semantics of the collected signs, such as the shape and meaning of direction signs, permission signs, and prohibition signs.

[0060] 3) Training: through machine learning methods, the algorithm acquires knowledge of these signs (attributes, concepts, semantics, etc.).
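As a concrete illustration of step 3, the following is a minimal sketch (not from the patent) of fine-tuning an off-the-shelf image classifier on manually labeled sign images; the dataset layout, class names, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: fine-tune a pretrained CNN to recognize indoor signs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: signs_dataset/train/<class_name>/*.jpg, where each class
# folder name encodes the manually labeled semantics (e.g. "exit_left").
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("signs_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the final layer so the network predicts one of the sign classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Map each class index back to its labeled semantics so the online system
# can attach meaning to a recognized sign, then store both in the database.
semantics = {i: name for i, name in enumerate(train_set.classes)}
torch.save({"state_dict": model.state_dict(), "semantics": semantics},
           "sign_recognizer.pt")
```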

[0061] ...


Abstract

The invention relates to the autonomous localization and navigation of mobile intelligent equipment such as service robots, unmanned aerial vehicles, automatic guided vehicles, and indoor three-dimensional modeling equipment. Compared with existing systems, image-text recognition and semantic binding of indoor signs are introduced. The characteristics are that (1) the result of image-text recognition is highly accurate compared with laser radar or visually recognized road signs; (2) only a camera needs to be added, so the cost and weight are low; (3) the accumulative error of sensors (IMU, laser radar, vision, etc.) is effectively eliminated, and the accuracy of loop detection is improved; (4) combining image-text recognition improves the overall accuracy of robot localization and navigation, making commercial use of robots possible; and (5) tool software based on the method combines the results of image-text recognition to endow an environment map with semantic information through manual binding, which promotes human-machine communication and helps robots complete advanced tasks.
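As an illustration of feature (3), the sketch below shows one simple way a recognized sign with a known map position could pull a drifted position estimate back toward an absolute fix. The sign database, the recognize-and-offset interface, and the blend weight are assumptions for illustration, not the patent's algorithm.

```python
# Illustrative sketch: fusing a recognized sign with a known map pose
# into a drifted position estimate.
import numpy as np

# Offline database (assumed): semantic label of each sign -> surveyed map
# position, produced by the manual binding step described above.
sign_map = {
    "exit_east_wing": np.array([42.0, 7.5]),
    "fire_extinguisher_3F": np.array([18.2, 3.1]),
}

def correct_pose(est_pos, label, rel_offset, weight=0.8):
    """Blend the drifted estimate toward the sign-derived absolute position.

    rel_offset: sign position in the robot frame (from the camera), so the
    robot position implied by the sign is map_pos - rel_offset.
    weight: assumed trust in the sign observation (1.0 replaces the estimate).
    """
    if label not in sign_map:
        return est_pos                       # unknown sign: no correction
    implied_pos = sign_map[label] - rel_offset
    return (1.0 - weight) * est_pos + weight * implied_pos

# Example: the estimate has drifted; the camera reads "exit_east_wing" 2 m
# ahead of the robot, giving an absolute fix that cancels accumulated error.
drifted = np.array([41.0, 6.0])
corrected = correct_pose(drifted, "exit_east_wing",
                         rel_offset=np.array([2.0, 0.0]))
print(corrected)   # pulled toward the sign-implied position [40.0, 7.5]
```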

Description

Technical Field

[0001] The present invention relates to the field of autonomous positioning and navigation of mobile smart devices such as service robots, unmanned aerial vehicles, automatic guided vehicles, and indoor 3D modeling equipment, and in particular to the positioning and navigation of smart mobile devices indoors where there is no GPS signal. This kind of positioning and navigation does not require the floor plan of the environment to be provided in advance, and does not require wireless networks such as UWB or ZigBee to be deployed indoors.

Background Technique

[0002] The demand for high-precision positioning comes from the booming and rapidly developing fields of robots and wearable devices; robots include, for example, housekeeping robots, sweeping robots, and logistics robots. For these devices, unlike for human positioning, positioning is a hard requirement, and the diversity of application scenarios cannot be served by special equipment such as UWB base stations and...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/32; G01C 11/00; G01C 21/00
CPC: G01C 11/00; G01C 21/00; G06V 20/63
Inventor: 王庆文
Owner: 南京万云信息技术有限公司