Robot obstacle avoidance behavior learning method based on deep learning

A deep-learning technology applied in the field of robot obstacle avoidance behavior learning. It addresses the problems of large camera noise and poor practical performance, and achieves the effects of reducing misjudgment, improving generalization ability, and compensating for inaccurate depth measurements.

Pending Publication Date: 2020-07-17
FOSHAN NANHAI GUANGDONG TECH UNIV CNC EQUIP COOP INNOVATION INST +1


Problems solved by technology

However, RGB-D cameras are noisy and susceptible to interference from sun...


Examples


Embodiment 1

[0048] S1. Operate the robot to perform obstacle avoidance in an unknown environment, collect RGB-D image data at a fixed frame rate with a Microsoft Kinect depth camera, and save the frames in time sequence;
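The capture step in S1 can be sketched as a fixed-rate loop that grabs paired RGB and depth frames and saves them under index-ordered filenames so the time sequence is preserved. The `grab_rgbd_frame` function below is a hypothetical stand-in for a real Kinect driver call (e.g. via libfreenect bindings), not part of the patent; the image sizes match the Kinect's 640x480 output.

```python
import time
from pathlib import Path

import numpy as np

# Hypothetical stand-in for a Kinect frame grab; a real driver would
# return a live RGB frame and a registered depth map instead.
def grab_rgbd_frame():
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # 8-bit color image
    depth = np.zeros((480, 640), dtype=np.uint16)    # depth in millimeters
    return rgb, depth

def record_sequence(out_dir, n_frames=5, fps=30.0):
    """Save frames with zero-padded indices so sorting preserves time order."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    period = 1.0 / fps
    for i in range(n_frames):
        rgb, depth = grab_rgbd_frame()
        stamp = f"{i:06d}"
        np.save(out / f"rgb_{stamp}.npy", rgb)
        np.save(out / f"depth_{stamp}.npy", depth)
        time.sleep(period)  # approximates a fixed frame rate
    return sorted(p.name for p in out.glob("rgb_*.npy"))

names = record_sequence("rgbd_seq", n_frames=3, fps=60.0)
```

A real capture loop would also record wall-clock timestamps alongside the index, since driver frame delivery can jitter.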

[0049] S2. Construct an RGB image and Depth image fusion neural network model, and input the collected RGB-D image data set into the model;

[0050] Specifically, this includes:

[0051] S21. Select a pre-trained AlexNet as the feature-extraction model for the input data: the input data is the NYU Depth Dataset V2 image data set, and pre-trained features are obtained after feature extraction by the pre-trained model;

[0052] S22. Reduce the dimensionality of the pre-trained features with an encoder: the encoder network consists of two convolutional layers and a batch-normalization (BN) layer;

[0053] S23. Vectorize the feature maps obtained from dimensionality reduction, and use canonical correlation analysis to fuse the fea...


Abstract

The invention discloses a robot obstacle avoidance behavior learning method based on deep learning. The method comprises the steps of: controlling a robot to carry out obstacle avoidance movement in an unknown environment, collecting RGB-D image data at a fixed frame rate, and naming and storing the frames in time sequence; constructing an RGB image and Depth image fusion neural network model, and inputting the collected RGB-D image data set into the model; setting the model's hyper-parameters and training it through a neural network training framework to obtain a trained fusion neural network model; and inputting the RGB-D image data set collected in S1 into the trained fusion neural network model, which outputs a fused feature image. Full functionality is achieved with only a single RGB-D camera as the input sensor, so in practical application the method has advantages in cost feasibility and in the simplicity of the robot's structural design.

Description

technical field [0001] The present invention relates to the fields of machine learning and pattern recognition, and in particular to a deep-learning-based method for learning robot obstacle avoidance behavior. Background technique [0002] SLAM (Simultaneous Localization and Mapping) addresses the localization and map-building problems of robots and other moving bodies in unknown environments, without prior knowledge of the environment. A SLAM method that uses a camera as its sensor is called visual SLAM (VSLAM). As an emerging visual sensor, the RGB-D camera can simultaneously acquire an RGB image of the surrounding environment and the depth (Depth) of each pixel. The RGB-D camera actively emits light toward objects and receives the returned light through an infrared structured-light arrangement to measure each object's distance from the camera. Compared with monocular and binocular cameras, it requires no position initialization and does not need to cal...

Claims


Application Information

IPC(8): G06T7/73; G06N3/08; G06N3/04; G06K9/62; G05D1/02
CPC: G06T7/73; G06N3/08; G05D1/0221; G05D1/0246; G06N3/045; G06F18/213
Inventor: 张宏, 潘雅灵, 何力, 管贻生, 周瑞浩
Owner FOSHAN NANHAI GUANGDONG TECH UNIV CNC EQUIP COOP INNOVATION INST