
Mobile robot visual navigation method and device based on deep reinforcement learning

A mobile-robot visual navigation technology based on deep reinforcement learning, applied to neural learning methods, navigation calculation tools, instruments, etc. It addresses problems such as the high cost of lidar, weakened navigation performance, and the inability to identify objects, with the effects of mitigating reward sparsity, speeding up convergence, and improving navigation ability.

Pending Publication Date: 2022-05-24
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

At present, the visual navigation technology of mobile robots based on deep reinforcement learning mainly has the following two problems: one is that the performance of visual navigation in large spaces is weak, and the other is that it is difficult to navigate in multiple different scenes at the same time.
However, such technology suffers indoors from weak signals and inaccurate positioning, leading to poor navigation, and it cannot achieve the goal of navigating both indoors and outdoors. Approaches that combine lidar-based technology with visual navigation complete simultaneous localization and mapping through lidar and visual sensors to realize mobile-robot navigation, but they require a clear prior understanding of the environment.
[0004] The existing technology does not fully exploit the depth information in the image; its generalization and obstacle-avoidance abilities for objects in unknown scenes are poor; and the reward function designed for deep reinforcement learning is relatively simple, so reward sparsity easily occurs. This makes it extremely difficult for the mobile robot to reach the target point, slows training convergence, and greatly weakens navigation performance in complex large spaces.




Embodiment Construction

[0045] In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

[0046] It should be noted that the terms "first", "second" and the like in the description, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It is to be understood that the data so used may be interchanged under appropriate c...



Abstract

The invention relates to the field of machine vision navigation, and in particular to a mobile-robot visual navigation method and device based on deep reinforcement learning. The method takes an image, a depth image and a target position as input and can realize navigation in a large space containing multiple mixed scenes, improving the navigation capability of deep-reinforcement-learning-based visual navigation for mobile robots. In addition, by designing a reward function related to the speed of the mobile robot and its distance to the target, training of the deep reinforcement learning model converges quickly. The method thus improves the navigation capability of deep reinforcement learning in complex large scenes, alleviates the reward-sparsity problem, and accelerates model convergence.
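The patent does not disclose the exact reward formula, but its stated idea of a reward tied to the robot's speed and its distance to the target can be sketched as a dense shaped reward. All coefficient values, the goal radius, and the collision penalty below are illustrative assumptions, not figures from the patent:

```python
# Hypothetical dense reward combining progress toward the goal with a small
# speed bonus, as one way to realize the patent's "speed + distance" idea.
GOAL_RADIUS = 0.5   # metres within which the goal counts as reached (assumed)
R_GOAL = 10.0       # terminal bonus for reaching the goal (assumed)
R_COLLIDE = -10.0   # penalty on collision (assumed)
K_PROGRESS = 1.0    # weight on distance progress (assumed)
K_SPEED = 0.1       # weight on forward speed (assumed)

def dense_reward(prev_dist, curr_dist, speed, collided):
    """Return a shaped per-step reward.

    prev_dist, curr_dist: distance to the target before/after the step (m)
    speed: current forward speed of the robot (m/s)
    collided: whether the robot hit an obstacle this step
    """
    if collided:
        return R_COLLIDE
    if curr_dist < GOAL_RADIUS:
        return R_GOAL
    progress = prev_dist - curr_dist  # positive when moving closer
    return K_PROGRESS * progress + K_SPEED * speed

# Moving 0.2 m closer at 0.5 m/s earns a positive reward every step,
# instead of only a sparse reward at the goal.
r = dense_reward(prev_dist=3.0, curr_dist=2.8, speed=0.5, collided=False)
```

Because the agent is rewarded for every step of progress rather than only on arrival, the gradient signal is dense, which is what lets training converge faster than with a goal-only reward.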

Description

technical field

[0001] The present invention relates to the field of machine vision navigation, and in particular to a method and device for visual navigation of a mobile robot based on deep reinforcement learning.

Background technique

[0002] Deep-reinforcement-learning-based visual navigation of mobile robots in complex large scenes takes the currently observed image and target information as input and outputs continuous actions so that the agent avoids obstacles and takes a short path to the designated position. At present, this technology mainly has the following two problems: one is that visual navigation performance in a large space is weak, and the other is that it is difficult to navigate in a variety of different scenes at the same time.

[0003] At present, the positioning technology that is relatively mature and widely used is the global positioning system positioning techn...
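The description names three inputs to the policy: the observed image, a depth image, and the target position. A minimal sketch of how such multi-modal observations could be fused into a single state vector is shown below; the random-projection "encoders" are stand-ins for whatever CNN/MLP feature extractors an implementation would actually use, and all dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Stand-in for a learned encoder: flatten the input and project it."""
    return np.tanh(x.reshape(-1) @ w)

H, W = 64, 64  # assumed observation resolution

# Fixed random projections playing the role of trained encoder weights.
w_rgb = rng.standard_normal((H * W * 3, 128)) * 0.01
w_depth = rng.standard_normal((H * W, 64)) * 0.01
w_goal = rng.standard_normal((2, 16)) * 0.01

rgb = rng.random((H, W, 3))     # camera image
depth = rng.random((H, W))      # depth image
goal = np.array([4.0, -2.0])    # target position relative to the robot

# Fuse the per-modality embeddings into one state vector for the policy head.
state = np.concatenate([
    encode(rgb, w_rgb),
    encode(depth, w_depth),
    encode(goal, w_goal),
])
print(state.shape)  # (208,)
```

Concatenating separate embeddings per modality, rather than stacking raw channels, lets the depth and goal information carry weight independently of the much larger RGB input.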

Claims


Application Information

IPC(8): G01C21/20; G06V10/40; G06V10/82; G06N3/04; G06N3/08
CPC: G01C21/20; G06N3/08; G06N3/044; G06N3/045; Y02T10/40
Inventor 张仪冯伟王卫军朱子翰
Owner SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI