
An adaptive visual navigation method based on ELM-LRF

An adaptive visual navigation technology, applied in the field of adaptive visual navigation, which addresses the slow training speed of existing methods and achieves fast training, low computing-resource usage, and improved navigation speed.

Active Publication Date: 2020-06-09
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

Deep reinforcement learning can be used for visual navigation, but its main drawback is that training is very slow.

Method used



Examples


Embodiment Construction

[0033] The technical solutions of the present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0034] As shown in Figure 1, the ELM-LRF neural network architecture is divided into two stages: ELM feature learning and ELM feature mapping. First, the input weights are assigned randomly: the pixels covered by each local receptive field are selected at random, and the weights from the input to the hidden layer are likewise generated at random. Next, downsampling (pooling) is performed, and finally the output is computed.
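A minimal numpy sketch of those two stages, assuming small grayscale images; the filter count, kernel size, pooling size, and regularization constant are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_lrf_features(images, n_filters=8, k=5, pool=4):
    """Stage 1: random local-receptive-field convolution + square-root pooling."""
    n, h, w = images.shape
    # Random k x k input weights, never trained. (The full ELM-LRF also
    # orthogonalises these filters; omitted here for brevity.)
    filters = rng.standard_normal((n_filters, k, k))
    feats = []
    for img in images:
        maps = []
        for f in filters:
            # Valid convolution over every k x k local receptive field.
            conv = np.array([[np.sum(img[i:i + k, j:j + k] * f)
                              for j in range(w - k + 1)]
                             for i in range(h - k + 1)])
            # Downsampling: square-root pooling over pool x pool blocks.
            ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
            pooled = np.sqrt(np.array(
                [[np.sum(conv[a * pool:(a + 1) * pool,
                              b * pool:(b + 1) * pool] ** 2)
                  for b in range(pw)] for a in range(ph)]))
            maps.append(pooled.ravel())
        feats.append(np.concatenate(maps))
    return np.array(feats)

def elm_lrf_train(images, targets, reg=1e-3):
    """Stage 2: solve the output weights in closed form (regularised least squares)."""
    H = elm_lrf_features(images)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ targets)
    return beta
```

Because only the output weights are solved (in one least-squares step) and the convolutional weights stay random, no gradient descent is needed, which is where the fast training claimed above comes from.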

[0035] As shown in Figure 2, the flow of a specific embodiment of the adaptive visual navigation method based on ELM-LRF of the present invention is as follows:

[0036] Step 1: Allocate storage space for (s_t, a_t, r_t, s_{t+1}, Q_t) structures. Q_t is initialized to 0. According to the current state s_t, an action a_t is selected at random (the robot can move forward, backward, left, or right); the robot...
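A hedged sketch of Step 1 in Python: the record fields mirror the (s_t, a_t, r_t, s_{t+1}, Q_t) tuple in the text, and the action set is the four moves it names; everything else (names, types) is an illustrative assumption.

```python
import random
from dataclasses import dataclass
from typing import Any, List

ACTIONS = ["forward", "backward", "left", "right"]  # the four moves named in the text

@dataclass
class Transition:
    s_t: Any          # current state: the image captured at time t
    a_t: str          # action selected in state s_t
    r_t: float        # immediate reward for a_t
    s_next: Any       # state s_{t+1} after executing a_t
    q_t: float = 0.0  # long-term reward Q_t, initialised to 0

buffer: List[Transition] = []  # the allocated (s_t, a_t, r_t, s_{t+1}, Q_t) space

def select_random_action() -> str:
    """Random exploration, as described in Step 1."""
    return random.choice(ACTIONS)
```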



Abstract

The invention designs an adaptive visual navigation method for a robot on the basis of an ELM-LRF neural network model. The method comprises the following steps: allocating storage space for (s_t, a_t, r_t, s_{t+1}, Q_t) structure data; letting the robot move repeatedly in a selected environment to obtain the required structure data; preprocessing the data by deleting, among entries sharing the same state, those with small Q values; then taking s_t as input and a_t as output to train the ELM-LRF, establishing a mapping between the current state and the optimal action; and finally testing the navigation ability of the robot by judging whether it finds the target. Here s_t is the current state, namely the captured picture; a_t is the robot's action (movement in one of the four directions) in state s_t; r_t is the immediate reward of a_t; s_{t+1} is the state of the robot after a_t; Q is the long-term reward, and Q_t is the total long-term reward obtained after a_t is carried out in state s_t. The method, built on the ELM-LRF model over this data space, greatly increases the navigation speed of the robot.
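A minimal sketch of the preprocessing step, operating on the Transition records from the earlier sketch: among entries sharing the same state, only the one with the largest Q value is kept. Hashing a state by its raw bytes is an assumption; the patent does not say how identical states are detected.

```python
def keep_best_per_state(buffer):
    """For duplicate states, keep only the transition with the largest Q_t."""
    best = {}
    for tr in buffer:
        key = tr.s_t.tobytes()  # assumes states are numpy image arrays
        if key not in best or tr.q_t > best[key].q_t:
            best[key] = tr      # the smaller-Q duplicate is discarded
    return list(best.values())
```

After this filtering, each remaining (s_t, a_t) pair is a best-so-far decision, which is what makes plain supervised training of the ELM-LRF (state in, action out) sensible.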

Description

Technical Field

[0001] The present invention provides an adaptive visual navigation method based on ELM-LRF (extreme learning machine with local receptive fields): pixel data are taken as input and a decision (the selection of a walking action) is output, repeatedly, until the desired object is found and the robot stops. It belongs to the fields of machine learning, neural network algorithms, and reinforcement learning.

Background Art

[0002] Visual navigation mounts a monocular or binocular camera on a robot to obtain local images of the environment, realizes self-localization and path recognition, and then makes navigation decisions, much like human navigation by visual feedback. From input image to output action, machine learning is the core. With the continuous improvement of computing performance and the generation of ever more data, mining the value of data to serve people's lives has become inevitable. Under this trend, "big data" and "artificial intelligence...
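The pixels-in, action-out loop described in [0001] could be sketched as follows; camera.capture(), robot.move(), policy(), and found_target() are hypothetical stand-ins for the platform's own sensing, actuation, decision, and detection hooks, since the patent only specifies the loop itself and its stopping condition.

```python
def navigate(camera, robot, policy, found_target, max_steps=1000):
    """Run the perceive -> decide -> act loop until the target is found."""
    for _ in range(max_steps):
        image = camera.capture()  # input: pixel data (the current state)
        if found_target(image):   # stop once the desired object is seen
            return True
        action = policy(image)    # output: the selected walking action
        robot.move(action)        # execute the move
    return False
```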


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G01C21/20; G06N3/08
CPC: G01C21/20; G06N3/084
Inventors: 王磊, 赵行, 李婵颖
Owner: BEIHANG UNIV