
Mobile robot visual following method based on deep reinforcement learning

A mobile robot visual following technology based on reinforcement learning, in the field of intelligent robots. It addresses problems such as high hardware and design costs, increased system cost, and degraded system performance, and achieves robustness to lighting changes, a higher level of intelligence, and a reduced likelihood of losing the followed target.

Active Publication Date: 2019-08-02
NORTHEASTERN UNIV

AI Technical Summary

Problems solved by technology

[0003] At present, research on following-robot systems is usually based either on visual sensors or on multi-sensor combinations. The former typically uses a stereo camera to collect visual images, which requires cumbersome calibration steps and adapts poorly to strong outdoor lighting; the latter raises system cost through the additional sensors and introduces a complex data-fusion process. To ensure robust tracking in dynamic, unknown environments, complex features usually have to be designed by hand, which greatly increases labor cost, time cost and computing resources. In addition, a traditional following-robot system usually divides the whole pipeline into two parts, a target-tracking module and a robot motion-control module; errors accumulate and are gradually amplified between the two, which ultimately has a large impact on system performance.
[0004] In summary, current traditional following-robot systems suffer from high hardware and design costs and cannot, with simple hardware, fully adapt to the variability and complexity of indoor and outdoor environments; the robot easily loses the target person, which reduces the robustness of the following system and seriously limits the application of following robots in real life.




Embodiment Construction

[0046] The software environment of this embodiment is Ubuntu 14.04. The mobile robot is a TurtleBot2, and its input sensor is a monocular color camera with a resolution of 640×480.

[0047] Step 1: Automatic dataset construction

[0048] For the supervised direction-control model of the following robot in the present invention, the input is the camera view image of the following robot, and the output is the action the robot should take at the current moment. Constructing the dataset therefore has two parts: acquiring the input field-of-view images and labeling the output actions.

[0049] Prepare a simple scene in which the followed target is relatively easy to distinguish from the background. In this simple scene, collect multiple images of the target person at different positions in the robot's field of view, captured from the following robot's viewpoint. Download a certain numb...
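The labeling half of the dataset construction above can be automated, since the correct steering action follows directly from where the target appears in the frame. The sketch below is a minimal, hypothetical version of such a pipeline: it composites a target patch onto a background at a random horizontal position and derives the direction label from the offset of the patch center relative to the image center. The dead-zone threshold and action names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

RNG = np.random.default_rng(0)

def composite(background, patch, x_left):
    """Paste `patch` into a copy of `background`, left edge at column x_left."""
    img = background.copy()
    h, w = patch.shape[:2]
    img[:h, x_left:x_left + w] = patch
    return img

def action_label(x_center, img_width, dead_zone=0.1):
    """Map the target's horizontal center to a steering action.
    A target left of center means the robot should turn left, and so on.
    `dead_zone` is an assumed fraction of the width treated as 'centered'."""
    offset = (x_center - img_width / 2) / img_width  # normalized to [-0.5, 0.5]
    if offset < -dead_zone:
        return "turn_left"
    if offset > dead_zone:
        return "turn_right"
    return "go_straight"

def make_sample(background, patch):
    """One simulated training pair: (view image, auto-generated action label)."""
    x_left = int(RNG.integers(0, background.shape[1] - patch.shape[1]))
    img = composite(background, patch, x_left)
    label = action_label(x_left + patch.shape[1] / 2, background.shape[1])
    return img, label
```

Repeating `make_sample` over many backgrounds and target patches would yield the "large amount of simulated data in a short time" that the abstract describes, without any manual annotation.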


Abstract

The invention provides a mobile robot visual following method based on deep reinforcement learning. A framework of "supervised pre-training on simulated images + model migration + RL" is adopted. The method comprises the following steps: first, a small amount of data is collected in a real environment and the dataset is automatically expanded by a computer program and image-processing techniques, so that a large simulated dataset adapted to the real scene is obtained in a short time for supervised training of the following robot's direction-control model; second, a CNN model for robot direction control is built and trained in a supervised fashion on the automatically constructed simulated dataset to serve as a pre-training model; then, the knowledge of the pre-training model is migrated to the DRL-based control model, so that the robot can execute the following task in a real environment. By incorporating a reinforcement learning mechanism, the robot improves its direction-control performance through interaction with the environment while following, so that robustness is high and cost is greatly reduced.
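The "model migration" step in the abstract amounts to initializing the DRL control model with the feature extractor learned during supervised pre-training, while the RL-specific output head starts fresh. The following is a framework-agnostic sketch using plain parameter dictionaries; the layer names, shapes, and the convention that only `conv*` layers are shared are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters of the supervised pre-trained model: a small
# conv feature extractor plus a direction-classification head.
pretrained = {
    "conv1/w": rng.standard_normal((8, 3, 5, 5)),
    "conv2/w": rng.standard_normal((16, 8, 3, 3)),
    "fc_direction/w": rng.standard_normal((3, 256)),  # 3 steering classes
}

# Hypothetical DRL policy: same feature extractor, but a fresh Q-value
# head that will be trained by reinforcement learning.
policy = {
    "conv1/w": np.zeros((8, 3, 5, 5)),
    "conv2/w": np.zeros((16, 8, 3, 3)),
    "fc_q/w": rng.standard_normal((3, 256)),
}

def migrate(src, dst, shared_prefixes=("conv",)):
    """Copy every parameter whose name starts with a shared prefix,
    leaving the destination's task-specific head untouched."""
    copied = []
    for name, value in src.items():
        if name.startswith(shared_prefixes) and name in dst:
            dst[name] = value.copy()
            copied.append(name)
    return copied

copied = migrate(pretrained, policy)
```

In a real framework this is the role of partial state-dict loading (e.g. loading only matching layers); the point is that the RL phase starts from visual features that already encode where the target is, rather than from random weights.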

Description

technical field [0001] The invention belongs to the technical field of intelligent robots and relates to a mobile robot visual following method based on deep reinforcement learning. Background technique [0002] With the advancement of technology and the development of society, more and more intelligent robots appear in people's lives. The following robot is one of the new systems that has received widespread attention in recent years. It can act as an assistant to its owner in complex environments such as hospitals, markets or schools, following the owner's movement, which brings great convenience to people's lives. A following robot should have autonomous perception, recognition, decision-making and motion functions, be able to identify a specific target, and, combined with a corresponding control system, follow the target in complex scenes. [0003] At present, research on following robot systems is usually based on vis...

Claims


Application Information

Patent Timeline
Patent Type & Authority Applications(China)
IPC(8): G06K9/62; G06N3/04; G05D1/12
CPC: G05D1/12; G06N3/045; G06F18/2155; G06F18/214
Inventor: 张云洲, 王帅, 庞琳卓, 刘及惟, 王磊
Owner NORTHEASTERN UNIV