
Visual SLAM method and system based on full convolutional neural network in dynamic scene

A technology combining convolutional neural networks with dynamic-scene handling, applied in the field of visual SLAM systems based on fully convolutional neural networks. It addresses problems such as positioning errors and deviations in camera pose calculation, improving the accuracy and robustness of camera tracking as well as the precision of positioning and mapping.

Publication Date: 2021-05-14 (pending application)
ZHEJIANG FORESTRY UNIVERSITY

AI Technical Summary

Problems solved by technology

[0004] Aiming at the technical problem in the prior art that slow-moving dynamic objects cause deviations in camera pose calculation, making the positioning of the entire visual SLAM system inaccurate, the present invention provides a visual SLAM method and a visual SLAM system based on a fully convolutional neural network in dynamic scenes. The method effectively improves the accuracy and robustness of camera tracking, and improves the positioning and mapping precision of visual SLAM in dynamic scenes.




Detailed Description of the Embodiments

[0025] Specific embodiments of the present invention will be described in detail below in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are only used to illustrate and explain the present invention, and are not intended to limit it.

[0026] It should be noted that, in the case of no conflict, the embodiments of the present invention and the features in the embodiments can be combined with each other.

[0027] In the present invention, unless stated to the contrary, orientation words such as "up", "down", "top", and "bottom" generally refer to the directions shown in the drawings, or to the vertical or gravitational directions; such terms are used only to describe the mutual positional relationships of the components.

[0028] The present invention will be described in detail below...



Abstract

The invention provides a visual SLAM method and system based on a fully convolutional neural network in a dynamic scene. The method comprises the following steps: acquiring an image data set; constructing a fully convolutional neural network model from the image data set; performing semantic segmentation on the monocular real-time image currently captured by the camera with the fully convolutional neural network model to obtain a semantic label image; removing dynamic feature points from the monocular real-time image according to the semantic label image to obtain its static feature points; and estimating the pose of the camera from the static feature points. With this method, dynamic targets can be accurately identified and semantically segmented, the accuracy and robustness of camera tracking are effectively improved, and the positioning and mapping precision of visual SLAM in dynamic scenes is improved.
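
The pipeline in the abstract (segment the frame, discard feature points on dynamic objects, track on the remainder) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the patented implementation: label_image stands in for the per-pixel class map that the patent's fully convolutional network would produce, ORB is an assumed feature detector, and DYNAMIC_CLASSES is a hypothetical set of label IDs treated as dynamic.

import cv2
import numpy as np

DYNAMIC_CLASSES = {15}  # hypothetical label IDs for dynamic objects, e.g. "person"

def static_keypoints(frame_gray, label_image):
    """Detect ORB features and drop those that fall on dynamic-class pixels."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:  # no features detected at all
        return [], None
    keep_kp, keep_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if label_image[y, x] not in DYNAMIC_CLASSES:
            keep_kp.append(kp)
            keep_desc.append(desc)
    return keep_kp, np.array(keep_desc)

Only the surviving static points would feed pose estimation, which is what shields the tracker from slowly moving objects.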

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, and in particular to a visual SLAM method based on a fully convolutional neural network in a dynamic scene, and to a corresponding visual SLAM system.

Background technique

[0002] Simultaneous Localization And Mapping (SLAM) refers to the process by which a robot in an unfamiliar environment estimates its own pose and builds a map of the environment using its on-board sensors. It is a prerequisite for many robot applications, such as path planning, collision-free navigation, and environment awareness. Visual SLAM uses visual information to estimate the camera's own pose and to build a three-dimensional map of the environment.

[0003] In the prior art, the relative displacement between two adjacent frames of the input image can be estimated from the matching of feature points between the frames, so a...
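
Paragraph [0003] refers to the standard two-frame front end: relative camera motion is estimated from feature points matched between adjacent frames, which is exactly where dynamic objects introduce error. Below is a minimal sketch of that prior-art step, not code from the patent; it assumes ORB features and a known 3x3 intrinsic matrix K.

import cv2
import numpy as np

def relative_pose(kp1, desc1, kp2, desc2, K):
    """Match ORB descriptors between two frames and recover the relative
    rotation R and unit-scale translation t via the essential matrix."""
    # Brute-force Hamming matching suits binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc1, desc2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects some outliers, but correspondences on a slowly moving
    # object can still dominate and bias the estimate.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

Because the matcher cannot distinguish static from dynamic points, matches on a slowly moving object bias the essential matrix and hence the recovered pose; this is the deviation the invention removes by masking dynamic points first.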


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T17/05; G06T7/11; G06T3/40; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06T17/05; G06T7/11; G06T3/4007; G06N3/08; G06T2207/20016; G06T2207/20081; G06T2207/30244; G06V10/44; G06N3/045; G06F18/241
Inventors: 吕艳, 柳双磊, 倪益华, 倪忠进, 宋源普
Owner: ZHEJIANG FORESTRY UNIVERSITY