Visual navigation method based on deep reinforcement learning and direction estimation

A reinforcement learning and direction estimation technology, applied in the field of visual navigation based on deep reinforcement learning and direction estimation, which can solve problems such as navigation failure and the inability of intelligent robots to quickly build maps.

Active Publication Date: 2021-09-14
SOUTH CHINA UNIV OF TECH


Problems solved by technology

However, in unknown and complex scenes, intelligent robots cannot navigate normally due to their inability to quickly build a map.




Embodiment Construction

[0050] The present invention will be further described in detail below in conjunction with the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

[0051] As shown in Figure 1, the visual navigation method based on deep reinforcement learning and direction estimation provided in this embodiment includes the following steps:

[0052] 1) Generate the AI2-THOR simulation platform offline dataset: generate data in the AI2-THOR simulation environment through a script that simulates robot movement. The offline dataset contains RGB-D images and robot position information, where the D in RGB-D denotes the depth channel; that is, each RGB-D image comprises an RGB image and a depth image. The specific process is as follows:

[0053] 1.1) Download the AI2-THOR python package, use the corresponding command to download 30 AI2-THOR simulation scenes, take 25 of them as the training set and the remaining 5 as the test set...
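As a rough illustration of step 1), the snippet below sketches how such an offline dataset could be collected with the ai2thor Python package. The random-walk action set, the FloorPlan scene names, and the stored record format are illustrative assumptions, not the patent's exact script.

```python
# Sketch: collecting an offline RGB-D + pose dataset in AI2-THOR.
# Assumes the ai2thor Python package; actions and scene split are illustrative.
import random
import numpy as np
from ai2thor.controller import Controller

ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight"]

def collect_scene(scene_name, steps_per_scene=200):
    """Random-walk one scene, recording RGB, depth, and agent pose."""
    controller = Controller(scene=scene_name, renderDepthImage=True)
    samples = []
    for _ in range(steps_per_scene):
        event = controller.step(action=random.choice(ACTIONS))
        agent = event.metadata["agent"]
        samples.append({
            "rgb": np.array(event.frame),          # H x W x 3 color image
            "depth": np.array(event.depth_frame),  # H x W depth map (the "D" in RGB-D)
            "position": agent["position"],         # {"x": ..., "y": ..., "z": ...}
            "rotation": agent["rotation"]["y"],    # heading about the vertical axis, degrees
        })
    controller.stop()
    return samples

# 30 scenes, 25 for training and 5 held out, mirroring step 1.1.
scenes = [f"FloorPlan{i}" for i in range(1, 31)]
train_data = [s for name in scenes[:25] for s in collect_scene(name)]
test_data = [s for name in scenes[25:] for s in collect_scene(name)]
```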



Abstract

The invention discloses a visual navigation method based on deep reinforcement learning and direction estimation. The method comprises the following steps: 1) generating an AI2-THOR simulation platform offline dataset; 2) extracting image features from the RGB-D images of the offline dataset; 3) constructing an A3C deep reinforcement learning model and training it with the features from step 2) as input; 4) transferring the model trained in step 3) to the real scene, fine-tuning it to improve generalization, and finally applying the fine-tuned model to visual navigation in the real scene. The method uses three-dimensional geometry to estimate the position and direction of the target object via direction estimation, providing direction features to the deep reinforcement learning model so that the model converges faster, generalizes better, and achieves accurate visual navigation.
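One way to picture the direction estimation step is as plain three-dimensional geometry: given the robot's position and heading and the target object's position, compute the distance and relative bearing to the target and hand them to the policy as a direction feature. The sketch below follows that reading; the function name, coordinate convention, and (sin, cos) angle encoding are assumptions, not the patent's exact formulation.

```python
# Sketch: a direction feature from 3-D geometry (assumed reading of the abstract).
import math

def direction_feature(robot_pos, robot_yaw_deg, target_pos):
    """Return (distance, sin, cos) of the bearing from robot to target.

    robot_pos, target_pos: dicts with "x", "y", "z" world coordinates; with the
    AI2-THOR convention that y points up, navigation happens in the x-z plane.
    robot_yaw_deg: robot heading about the vertical axis, in degrees.
    """
    dx = target_pos["x"] - robot_pos["x"]
    dz = target_pos["z"] - robot_pos["z"]
    distance = math.hypot(dx, dz)
    # World-frame bearing of the target, then bearing relative to the robot's
    # heading, wrapped into [-180, 180).
    world_bearing = math.degrees(math.atan2(dx, dz))
    relative = (world_bearing - robot_yaw_deg + 180.0) % 360.0 - 180.0
    # Encode the angle as (sin, cos) so the feature is continuous across ±180°.
    rad = math.radians(relative)
    return distance, math.sin(rad), math.cos(rad)
```

Fed alongside the image features, such a compact cue tells the policy roughly where the target lies, which is consistent with the abstract's claim of faster convergence.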

Description

Technical Field

[0001] The invention relates to the technical field of robot visual navigation, and in particular to a visual navigation method based on deep reinforcement learning and direction estimation.

Background Technique

[0002] At present, indoor navigation technology based on mapping and navigation has gradually matured and been applied to daily life scenarios, such as sweeping robots and shopping-mall service robots. However, in unknown and complex scenes, intelligent robots cannot navigate normally due to their inability to quickly build a map; this is currently the biggest challenge that intelligent robots face in indoor navigation. Drawing on the way humans navigate indoors, researchers have developed a new navigation method: map-free visual navigation. Vision-based navigation methods can better adapt to new, unknown, and complex environments, in which intelligent robots can autonomously explore and navigate to the vi...


Application Information

IPC(8): G06F30/27; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06F30/27; G06N3/08; G06N3/044; G06N3/045; G06F18/214
Inventor: 毕盛 (BI Sheng), 罗超 (LUO Chao), 董敏 (DONG Min), 钟浩钊 (ZHONG Haozhao)
Owner SOUTH CHINA UNIV OF TECH