Visual simultaneous localization and mapping method based on depth convolution auto-encoder

A deep convolutional autoencoder technology, applied in the field of image processing, that addresses the problems that an accurate ground-truth pose is difficult to obtain and that the absolute error of GPS is too large for it to be used as a supervision source.

Pending Publication Date: 2020-06-23
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

The true value of the camera pose can be obtained by using GPS, IMU or multi-sensor fusion, but the absolute error of GPS is relatively large, so it cannot be used as an accurate ground truth.




Embodiment Construction

[0127] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.

[0128] A method for simultaneous visual localization and map construction based on a deep convolutional autoencoder. The method includes the following steps:

[0129] Step 1: Select different training data and perform data preprocessing according to requirements, such as image flipping, compression distortion, local cropping, Gaussian noise, etc. A minimal sketch of these four operations follows.
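
A minimal sketch of the preprocessing in step 1, assuming uint8 numpy images of shape (H, W, 3) and OpenCV for the JPEG compression distortion; the probabilities and parameter ranges here are illustrative assumptions, not the patent's settings.

```python
import cv2
import numpy as np

def augment(img, rng=np.random.default_rng()):
    """Randomly apply flipping, compression distortion, local cropping
    and Gaussian noise to a uint8 BGR image of shape (H, W, 3)."""
    if rng.random() < 0.5:                        # horizontal flip
        img = cv2.flip(img, 1)
    if rng.random() < 0.5:                        # JPEG compression distortion
        q = int(rng.integers(30, 90))             # random JPEG quality
        _, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), q])
        img = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    if rng.random() < 0.5:                        # local crop, resized back
        h, w = img.shape[:2]
        y, x = int(rng.integers(0, h // 4)), int(rng.integers(0, w // 4))
        img = cv2.resize(img[y:y + 3 * h // 4, x:x + 3 * w // 4], (w, h))
    if rng.random() < 0.5:                        # additive Gaussian noise
        noise = rng.normal(0.0, 8.0, img.shape)
        img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return img
```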

[0130] Step 2: Establish a multi-task learning network based on a deep convolutional autoencoder; the network consists of a shared encoder and task-specific decoders for depth estimation, camera pose estimation, optical flow estimation and semantic segmentation.
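
A minimal sketch of the structure named in step 2: one shared convolutional encoder feeding task-specific decoder heads. All layer sizes are illustrative assumptions, not the patent's architecture, and a real pose head would consume a stack of adjacent frames rather than a single image.

```python
import torch
import torch.nn as nn

class MultiTaskAutoencoder(nn.Module):
    """Shared encoder with decoder heads for depth, optical flow,
    semantic segmentation and pose; assumes H and W divisible by 8."""
    def __init__(self, num_classes=19):
        super().__init__()
        # shared encoder: strided convolutions compress the image 8x
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        def decoder(out_ch):
            # symmetric decoder: transposed convolutions restore resolution
            return nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )
        self.depth_head = decoder(1)           # per-pixel depth
        self.flow_head = decoder(2)            # per-pixel optical flow (u, v)
        self.seg_head = decoder(num_classes)   # per-pixel class logits
        # pose head: regress a 6-DoF pose vector from pooled features
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 6)
        )

    def forward(self, x):
        z = self.encoder(x)
        return {"depth": self.depth_head(z), "flow": self.flow_head(z),
                "seg": self.seg_head(z), "pose": self.pose_head(z)}
```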



Abstract

The invention discloses a visual simultaneous localization and mapping (Visual-SLAM) method based on a deep convolutional autoencoder. The method comprises the steps of: 1, performing data preprocessing on the training data; 2, establishing a multi-task learning network; 3, taking three adjacent frames of binocular images in the image sequence as the network input; 4, constructing a loss function; 5, training, validating and testing the multi-task network; 6, using the trained shared encoder network for loop closure detection; 7, constructing a new Visual-SLAM system front end from the preceding six steps and the system back end through pose graph optimization or factor graph optimization, building a complete system; and 8, verifying the positioning accuracy and robustness. A deep convolutional autoencoder and a semi-supervised multi-task learning method are used to construct the front end of the SLAM system, covering depth estimation, camera pose estimation, optical flow estimation and semantic segmentation, and feature maps of the network are used to construct image representations for loop detection.
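
A minimal sketch of step 6 as described above: the trained shared encoder turns each keyframe image into a global descriptor, and loop candidates are found by cosine similarity against descriptors of earlier keyframes. The pooling choice, the threshold and the min_gap heuristic are illustrative assumptions, not the patent's exact design.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def image_descriptor(encoder, image):
    """Global-average-pool the encoder's last feature map into a unit vector."""
    feat = encoder(image.unsqueeze(0))                 # (1, C, H, W) feature map
    desc = F.adaptive_avg_pool2d(feat, 1).flatten(1)   # (1, C)
    return F.normalize(desc, dim=1).squeeze(0)         # (C,) unit-norm descriptor

def detect_loop(desc, database, threshold=0.9, min_gap=50):
    """Return the index of the best-matching past keyframe, or None.
    min_gap excludes recent frames so neighbours don't match trivially."""
    best_i, best_sim = None, threshold
    candidates = database[:-min_gap] if len(database) > min_gap else []
    for i, past in enumerate(candidates):
        sim = float(torch.dot(desc, past))             # cosine sim (unit norms)
        if sim > best_sim:
            best_i, best_sim = i, sim
    return best_i
```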

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a method for simultaneous visual localization and map construction based on a deep convolutional autoencoder.

Background technique

[0002] Simultaneous localization and mapping technology means that a mobile robot equipped with specific sensors, without any prior knowledge of the environment, uses those sensors to recover the three-dimensional information of the scene during motion while estimating its own pose; it is the key technology underlying basic requirements such as robot path planning, autonomous navigation and other complex tasks.

[0003] A complete visual simultaneous localization and mapping (Visual-SLAM) system can theoretically be divided into two parts, the front end and the back end, whose relationship is shown in figure 1. The front end mainly includes visual odometry, local map construction and loop detection. The visual odometry...
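
The back end mentioned in the abstract can be illustrated with a minimal pose-graph sketch: keyframe poses are graph nodes, odometry and loop-closure measurements are edges, and the graph is optimized over the relative-pose residuals. The 2D (x, y, theta) parameterization and the SciPy solver are simplifications for illustration; a production system would optimize over SE(3) with a dedicated solver such as g2o or GTSAM.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def relative_pose(a, b):
    """Pose of b expressed in the frame of a (2D: x, y, theta)."""
    c, s = np.cos(a[2]), np.sin(a[2])
    dx, dy = b[0] - a[0], b[1] - a[1]
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(b[2] - a[2])])

def residuals(flat, edges, n):
    """Stack relative-pose errors over all edges; anchor pose 0 as gauge."""
    poses = flat.reshape(n, 3)
    res = [poses[0]]
    for i, j, meas in edges:
        r = relative_pose(poses[i], poses[j]) - meas
        r[2] = wrap(r[2])
        res.append(r)
    return np.concatenate(res)

# toy graph: drive around a unit square, then close the loop at the end
edges = [(i, i + 1, np.array([1.0, 0.0, np.pi / 2])) for i in range(3)]
edges.append((3, 0, np.array([1.0, 0.0, np.pi / 2])))   # loop-closure edge
noisy = np.array([[0, 0, 0], [1.1, 0.1, 1.5], [0.9, 1.1, 3.0], [-0.1, 1.0, -1.7]])
sol = least_squares(residuals, noisy.ravel(), args=(edges, 4))
print(sol.x.reshape(4, 3))   # poses pulled onto a consistent square
```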


Application Information

IPC(8): G06T7/73; G06T7/80; G06T9/00
CPC: G06T7/73; G06T7/80; G06T9/004; G06T9/002; G06T2207/20081; G06T2207/20084; G06T2207/20228; G06T2207/30208; Y02T10/40
Inventors: 叶东 (Ye Dong), 吕旭冬 (Lyu Xudong), 王硕 (Wang Shuo)
Owner: HARBIN INST OF TECH