
A Reinforcement Learning Path Planning Method Using Artificial Potential Field

A reinforcement learning and path planning technology, applied to navigation calculation tools, two-dimensional position/course control, vehicle position/route/altitude control, etc. It addresses problems such as excessive invalid iterations, insufficient environmental exploration, and difficult convergence, and achieves the effects of reducing the number of iterations, improving the stability of the convergence results, and shortening the path planning time.

Active Publication Date: 2022-08-05
HUBEI UNIV OF AUTOMOTIVE TECH

AI Technical Summary

Problems solved by technology

The traditional Q-learning algorithm has the following problems: (1) during initialization, all Q values are set to 0 or to random values, so the agent can only search blindly in the initial stage, which results in too many invalid iterations; (2) the ε-greedy strategy is used for action selection, and too large an ε value makes the agent explore the environment excessively and converge slowly, while too small an ε value leaves the environment insufficiently explored and the agent settles on a suboptimal solution, so it is difficult to balance exploration and exploitation.
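For reference, a minimal sketch of the conventional tabular Q-learning loop criticized above is given below; the 20×20 grid size, reward handling, and hyperparameter values are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# Conventional tabular Q-learning pieces referenced above (illustrative only).
N_STATES, N_ACTIONS = 20 * 20, 4          # 20x20 grid, four moves (up/down/left/right)
Q = np.zeros((N_STATES, N_ACTIONS))       # problem (1): all-zero init -> blind early search
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # problem (2): one fixed epsilon for the whole run

def choose_action(state, rng=np.random.default_rng()):
    """Plain epsilon-greedy: explore with probability EPSILON, otherwise exploit."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))   # random exploration
    return int(np.argmax(Q[state]))           # greedy exploitation

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```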




Embodiment Construction

[0035] Embodiments of the technical solutions of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only used to illustrate the technical solutions of the present invention more clearly; they are therefore examples only and cannot be used to limit the protection scope of the present invention.

[0036] Referring to figure 1, the present invention provides a reinforcement learning path planning method that introduces an artificial potential field. The steps of the method are as follows:

[0037] Step 1: Segment the environment image obtained by the mobile robot, divide the image into a 20×20 grid, and build an environment model using the grid method. If an obstacle is found in a grid cell, that cell is defined as an obstacle cell; if the target point is found in a grid cell, that cell is set as the target position, i.e. the final position the mobile robot is to reach; the other cells are defined as cells without obsta...
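A minimal sketch of this grid-map construction is given below, assuming the segmented image has already been reduced to a boolean obstacle mask; the cell codes (0 free, 1 obstacle, 2 target) and the helper name build_grid_map are illustrative choices, not terms from the patent.

```python
import numpy as np

# Step 1 sketch: turn a segmented 20x20 environment image into a grid model.
# The obstacle mask, target coordinates, and cell codes are assumptions.
FREE, OBSTACLE, TARGET = 0, 1, 2

def build_grid_map(obstacle_mask, target_cell, n=20):
    """obstacle_mask: (n, n) boolean array derived from the segmented image."""
    grid = np.full((n, n), FREE, dtype=int)
    grid[obstacle_mask] = OBSTACLE        # cells in which an obstacle was found
    grid[target_cell] = TARGET            # final position the robot is to reach
    return grid

# Toy usage: a short wall of obstacles and a target in the far corner.
mask = np.zeros((20, 20), dtype=bool)
mask[5, 3:8] = True
env_grid = build_grid_map(mask, target_cell=(19, 19))
```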


Abstract

The invention discloses a reinforcement learning path planning method that introduces an artificial potential field. The method comprises: initializing the algorithm parameters; S3, selecting an action using the dynamic factor adjustment strategy; S4, executing the action and updating the Q value; S5, repeating steps S3 and S4 until a set number of steps or a convergence condition is reached; S6, selecting the action with the largest Q value at each step to obtain the optimal path; S7, sending the optimal path to the controller of the mobile robot and controlling the mobile robot to walk along the optimal path. Compared with the traditional algorithm, the improved Q-learning algorithm of the present invention shortens the path planning time by 85.1%, reduces the number of iterations before convergence by 74.7%, and improves the stability of the algorithm's convergence result.
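A rough sketch of how steps S3 to S7 might fit together is given below. The abstract does not spell out how the artificial potential field seeds the Q table or what form the dynamic factor adjustment strategy takes, so the pre-initialized q_init argument, the decaying ε schedule, and the env interface (reset/step/n_actions) are all assumptions for illustration, not the patented implementation.

```python
import numpy as np

def train(env, q_init, episodes=500, alpha=0.1, gamma=0.9):
    """q_init: (n_states, n_actions) array, assumed to be seeded beforehand
    (the patent seeds it via the artificial potential field)."""
    Q = q_init.copy()
    rng = np.random.default_rng(0)
    for ep in range(episodes):                            # S5: repeat S3-S4
        state = env.reset()
        epsilon = max(0.05, 0.9 * (1.0 - ep / episodes))  # assumed dynamic factor: decaying exploration
        done = False
        while not done:
            if rng.random() < epsilon:                    # S3: action selection
                action = int(rng.integers(env.n_actions))
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)   # S4: execute the action
            Q[state, action] += alpha * (
                reward + gamma * np.max(Q[next_state]) - Q[state, action]
            )                                             # S4: update the Q value
            state = next_state
    return Q

def extract_path(env, Q, max_steps=400):
    """S6: follow the largest Q value at each step to read off the planned path."""
    state, path = env.reset(), []
    for _ in range(max_steps):
        action = int(np.argmax(Q[state]))
        state, _, done = env.step(action)
        path.append(state)
        if done:
            break
    return path                                           # S7: this path is sent to the robot controller
```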

Description

Technical Field

[0001] The invention relates to the technical field of robot path planning, and in particular to a reinforcement learning path planning method that introduces an artificial potential field.

Background Technique

[0002] With the development of science and technology, more and more mobile robots have entered people's daily life, and the path planning problem of mobile robots has received increasing attention. Path planning technology helps a robot avoid obstacles and plan an optimal movement route from the starting point to the target point with respect to a given index. According to how much knowledge of the environment is available during planning, path planning can be divided into global path planning and local path planning. Widely used global path planning algorithms include the A* algorithm, the Dijkstra algorithm, the visibility graph method, and the free space method; local path planning algorithms include artificial ...
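Since the background cuts off just as it turns to local planning methods, here is the textbook artificial potential field formulation for orientation only: an attractive potential pulls the robot toward the goal and a repulsive potential pushes it away from nearby obstacles. The gains k_att, k_rep and the influence radius d0 below are illustrative values, not parameters from this patent.

```python
import numpy as np

def attractive_potential(pos, goal, k_att=1.0):
    """Quadratic well pulling the robot toward the goal: 0.5 * k_att * ||pos - goal||^2."""
    return 0.5 * k_att * float(np.sum((np.asarray(pos) - np.asarray(goal)) ** 2))

def repulsive_potential(pos, obstacles, k_rep=1.0, d0=3.0):
    """Barrier pushing the robot away from obstacles closer than d0 cells."""
    total = 0.0
    for obs in obstacles:
        d = float(np.linalg.norm(np.asarray(pos) - np.asarray(obs)))
        if 0.0 < d <= d0:
            total += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return total

def total_potential(pos, goal, obstacles):
    """Total field value at a grid cell; lower values mark more desirable cells."""
    return attractive_potential(pos, goal) + repulsive_potential(pos, obstacles)

# Example: potential at cell (2, 3) with goal (19, 19) and one obstacle at (2, 5).
u = total_potential((2, 3), (19, 19), [(2, 5)])
```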


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G01C21/20, G05D1/02
CPC: G01C21/20, G05D1/0088, G05D1/0221
Inventors: 王科银, 石振, 张建辉, 杨正才
Owner: HUBEI UNIV OF AUTOMOTIVE TECH