
Reinforcement learning path planning method introducing artificial potential field

A technology combining reinforcement learning and the artificial potential field, applied to navigation calculation tools, two-dimensional position/course control, vehicle position/route/altitude control, etc. It can solve problems such as insufficient exploration of the environment, the difficulty of balancing exploration and exploitation, and difficulty in convergence.

Active Publication Date: 2021-02-09
HUBEI UNIV OF AUTOMOTIVE TECH

AI Technical Summary

Problems solved by technology

The traditional Q-learning algorithm has the following problems: (1) during initialization, all Q values are set to 0 or to random values, so the agent can only search blindly in the early stage, which produces a large number of invalid iterations; (2) the ε-greedy strategy is used for action selection: too large an ε value makes the agent explore the environment excessively and converge slowly, while too small an ε value leaves the environment insufficiently explored and the agent settles on a suboptimal solution, so it is difficult to balance exploration and exploitation.
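For reference, a minimal Python sketch of the standard ε-greedy rule described above; the Q-table shape, number of actions, and ε value are illustrative assumptions rather than the patent's settings:

```python
import numpy as np

def epsilon_greedy(Q, state, epsilon, rng):
    """Standard epsilon-greedy rule: with probability epsilon take a random
    action (exploration), otherwise take the greedy action (exploitation)."""
    n_actions = Q.shape[-1]
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))   # explore: uniformly random action
    return int(np.argmax(Q[state]))           # exploit: action with the largest Q value

# Illustrative use: a large epsilon explores broadly but converges slowly,
# while a small epsilon may lock onto a suboptimal action early.
rng = np.random.default_rng(0)
Q = np.zeros((20 * 20, 4))                    # 400 grid states, 4 actions (assumed)
action = epsilon_greedy(Q, state=0, epsilon=0.1, rng=rng)
```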




Embodiment Construction

[0035] Embodiments of the technical solution of the present invention will be described in detail below in conjunction with the accompanying drawings. The following examples are only intended to illustrate the technical solution of the present invention more clearly; they are examples only and do not limit the protection scope of the present invention.

[0036] Referring to Figure 1, the reinforcement learning path planning method introducing an artificial potential field provided by the present invention comprises the following steps:

[0037] Step 1: Segment the environmental image acquired by the mobile robot, divide the image into 20×20 grids, and establish the environment model with the grid method. If an obstacle is found in a grid cell, that cell is defined as an obstacle cell; if the target point is found in a grid cell, that cell is defined as the target position, i.e., the final position the mobile robot is to reach; the remaining cells are defined as free grids without obstacles...
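As an illustration of the grid method in Step 1, the sketch below builds a 20×20 occupancy array; the cell codes, function name, and example inputs are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

FREE, OBSTACLE, GOAL = 0, 1, 2   # illustrative cell codes

def build_grid_map(obstacle_cells, goal_cell, size=20):
    """Grid-method environment model as in Step 1: a size x size array
    where each cell is marked as free, obstacle, or goal."""
    grid = np.full((size, size), FREE, dtype=np.int8)
    for row, col in obstacle_cells:
        grid[row, col] = OBSTACLE
    grid[goal_cell] = GOAL
    return grid

# Example: two obstacle cells and a goal in the bottom-right corner (assumed positions).
grid = build_grid_map(obstacle_cells=[(3, 4), (10, 12)], goal_cell=(19, 19))
```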



Abstract

The invention discloses a reinforcement learning path planning method introducing an artificial potential field, and the method comprises the following steps: S1, building a grid map, introducing a gravitational field function to initialize a state value, and obtaining a simulation environment for training a reinforcement learning intelligent agent; S2, initializing algorithm parameters; S3, selecting an action by adopting a dynamic factor adjustment strategy; S4, executing an action, and updating a Q value; S5, repeatedly executing step 3 and step 4 until a certain step number or a certain convergence condition is reached; S6, selecting the action with the maximum Q value in each step, and obtaining the optimal path; S7, sending the optimal path to a controller of the mobile robot, and controlling the mobile robot to walk according to the optimal path. Compared with the traditional algorithm, the improved Q-learning algorithm shortens the path planning time by 85.1%, reduces the number of iterations before convergence by 74.7%, and also improves the stability of the convergence result.
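To make the flow of steps S1 to S6 concrete, below is a minimal sketch of a tabular Q-learning loop in which the Q table is initialized from an attractive (gravitational) potential toward the goal and ε decays over training as a simple stand-in for the dynamic factor adjustment strategy; the potential function, reward values, ε schedule, and parameter values are assumptions, not the patent's exact formulas:

```python
import numpy as np

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right

def attractive_potential(cell, goal, k=1.0):
    """Assumed gravitational-field function: grows as the cell nears the goal."""
    return k / (1.0 + np.hypot(cell[0] - goal[0], cell[1] - goal[1]))

def train(grid, start, goal, episodes=500, alpha=0.1, gamma=0.9,
          eps_start=0.9, eps_end=0.05, max_steps=400, seed=0):
    rows, cols = grid.shape
    # S1/S2: bias the initial Q values with the potential of the successor cell
    Q = np.zeros((rows, cols, len(MOVES)))
    for r in range(rows):
        for c in range(cols):
            for a, (dr, dc) in enumerate(MOVES):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    Q[r, c, a] = attractive_potential((nr, nc), goal)
    rng = np.random.default_rng(seed)
    for ep in range(episodes):
        # S3: a simple dynamic factor -- epsilon decays as training progresses
        eps = eps_start + (eps_end - eps_start) * ep / (episodes - 1)
        state = start
        for _ in range(max_steps):                      # S5: step-count cutoff
            r, c = state
            a = int(rng.integers(len(MOVES))) if rng.random() < eps \
                else int(np.argmax(Q[r, c]))
            nr, nc = r + MOVES[a][0], c + MOVES[a][1]
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr, nc] == 1:
                reward, nxt = -10.0, state              # wall or obstacle (code 1), stay put
            elif (nr, nc) == goal:
                reward, nxt = 100.0, (nr, nc)
            else:
                reward, nxt = -1.0, (nr, nc)
            # S4: standard Q-value update
            Q[r, c, a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[r, c, a])
            state = nxt
            if state == goal:
                break
    return Q

def extract_path(Q, start, goal, max_len=400):
    """S6: follow the greedy (maximum-Q) action from the start toward the goal."""
    path, state = [start], start
    while state != goal and len(path) < max_len:
        r, c = state
        dr, dc = MOVES[int(np.argmax(Q[r, c]))]
        state = (r + dr, c + dc)
        path.append(state)
    return path
```

The path returned by extract_path would then be handed to the mobile robot's controller, corresponding to step S7.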

Description

Technical Field

[0001] The invention relates to the technical field of robot path planning, in particular to a reinforcement learning path planning method that introduces an artificial potential field.

Background Technology

[0002] With the development of science and technology, more and more mobile robots have entered people's daily life, and the problem of path planning for mobile robots has received more and more attention. Path planning technology can help the robot avoid obstacles and plan an optimal movement route from the starting point to the target point with respect to a given index. According to how much knowledge of the environment is available during planning, path planning can be divided into global path planning and local path planning. Widely used global path planning algorithms include the A* algorithm, the Dijkstra algorithm, the visibility graph method, and the free space method; local path planning algorithms include the artif...


Application Information

IPC(8): G01C21/20, G05D1/02
CPC: G01C21/20, G05D1/0088, G05D1/0221
Inventor: 王科银, 石振, 张建辉, 杨正才
Owner: HUBEI UNIV OF AUTOMOTIVE TECH