Dynamic scene visual positioning method based on image semantic segmentation

A technology for visual positioning in dynamic scenes based on image semantic segmentation, applied in image analysis, image data processing, instruments, and similar fields. It addresses the limited positioning accuracy and robustness of traditional methods in dynamic scenes, and achieves improved positioning accuracy and robustness.

Active Publication Date: 2019-08-02
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

However, the positioning accuracy and robustness of traditional point-feature-based visual SLAM methods in dynamic scenes remain to be improved.



Examples


Embodiment 1

[0081] The present invention is evaluated on the Frankfurt monocular image sequence, which is part of the Cityscapes dataset. The full Frankfurt sequence provides more than 100,000 frames of outdoor-environment images together with ground-truth localization results. The sequence is divided into several shorter sub-sequences of 1,300 to 2,500 frames, each containing dynamic objects such as moving cars and pedestrians. The experimental platform is configured with an Intel Xeon E5-2690 V4 CPU, 128 GB RAM, and an NVIDIA Titan V GPU.
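Paragraph [0081] notes that the Frankfurt sequence provides ground-truth localization results. The patent excerpt does not state which error metric underlies the reported comparison, but a common choice for monocular SLAM is absolute trajectory error (ATE) after a least-squares similarity alignment, which absorbs the scale that a monocular system cannot observe. A minimal Python sketch under that assumption, with both trajectories given as N×3 arrays of time-associated camera positions (function names are illustrative):

    import numpy as np

    def umeyama_alignment(est, gt):
        """Least-squares similarity transform (s, R, t) mapping est onto gt."""
        mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
        e, g = est - mu_e, gt - mu_g
        cov = g.T @ e / est.shape[0]           # cross-covariance of the two sets
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1                       # avoid a reflection
        R = U @ S @ Vt
        var_e = (e ** 2).sum() / est.shape[0]  # variance of the estimated track
        s = np.trace(np.diag(D) @ S) / var_e   # optimal scale
        t = mu_g - s * R @ mu_e
        return s, R, t

    def ate_rmse(est, gt):
        """Root-mean-square absolute trajectory error after alignment."""
        s, R, t = umeyama_alignment(est, gt)
        aligned = (s * (R @ est.T)).T + t
        return np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean())

A 13% to 30% improvement would then correspond to the ratio of ATE RMSE values between the proposed and baseline trajectories.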

[0082] The sub-sequences extracted from the original Frankfurt sequence are as follows (a frame-selection sketch follows the list):

[0083] Seq.01:frankfurt_000001_054140_leftImg8bit.png-frankfurt_000001_056555_leftImg8bit.png

[0084] Seq.02:frankfurt_000001_012745_leftImg8bit.png-frankfurt_000001_014100_leftImg8bit.png

[0085] Seq.03:frankfurt_000001_003311_leftImg8bit.png-frankfurt_000001_005555_leftImg8bit.png

[0086] Seq.04:frankfurt_000001_010580_leftImg8bit.png-frankfurt_000001_012739_leftImg8bit.png

...
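A small sketch of how such sub-sequences might be selected from a directory of Cityscapes frames. The helper names and directory path are hypothetical; the frame id is parsed from the third underscore-separated field of the standard city_sequence_frame_leftImg8bit.png naming scheme:

    from pathlib import Path

    def frame_id(name):
        # frankfurt_000001_054140_leftImg8bit.png -> 54140
        return int(Path(name).name.split("_")[2])

    def frames_in_range(image_dir, first_name, last_name):
        """Sorted Frankfurt frames between two boundary filenames, inclusive."""
        lo, hi = frame_id(first_name), frame_id(last_name)
        return sorted(
            (p for p in Path(image_dir).glob("frankfurt_*_leftImg8bit.png")
             if lo <= frame_id(p) <= hi),
            key=frame_id,
        )

    # e.g. Seq.01 from the list above (placeholder directory):
    seq01 = frames_in_range("leftImg8bit_sequence/val/frankfurt",
                            "frankfurt_000001_054140_leftImg8bit.png",
                            "frankfurt_000001_056555_leftImg8bit.png")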



Abstract

The invention discloses a dynamic scene visual positioning method based on image semantic segmentation, belonging to the field of SLAM (Simultaneous Localization and Mapping). The method comprises the following steps: first, dynamic objects in the original image are segmented using supervised deep learning to obtain a semantic image; on this basis, ORB feature points are extracted from the original image, and the feature points belonging to dynamic objects are removed according to the semantic image; finally, based on the remaining feature points, the camera motion is localized and tracked using a point-feature-based monocular SLAM method. Positioning results show that, compared with a traditional method, the method improves positioning precision in dynamic scenes by 13% to 30%.
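The pipeline in the abstract (segment dynamic objects, extract ORB feature points, discard the points that fall on dynamic objects) can be illustrated with OpenCV. This is a minimal sketch, not the patent's implementation: the segmentation network is assumed to have already produced dynamic_mask, a binary image whose non-zero pixels mark dynamic classes such as cars and pedestrians:

    import cv2

    def remove_dynamic_features(gray, dynamic_mask, n_features=2000):
        """Extract ORB features, then drop those lying on dynamic-object pixels."""
        orb = cv2.ORB_create(nfeatures=n_features)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        if descriptors is None:
            return [], None
        # kp.pt is (x, y); the mask is indexed as [row, col] = [y, x]
        keep = [i for i, kp in enumerate(keypoints)
                if dynamic_mask[int(kp.pt[1]), int(kp.pt[0])] == 0]
        return [keypoints[i] for i in keep], descriptors[keep]

The retained points would then feed the point-feature-based monocular SLAM front end in place of the unfiltered ORB output.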

Description

Technical field

[0001] The invention relates to the application of deep learning in visual SLAM and belongs to the field of SLAM (Simultaneous Localization and Mapping).

Background technique

[0002] Simultaneous localization and mapping (SLAM) is a key technology for robots operating autonomously in unknown environments. Based on the environmental data detected by the robot's external sensors, SLAM constructs a map of the robot's surroundings and at the same time gives the robot's position within that map. Compared with ranging instruments such as radar and sonar, vision sensors are small, consume little power, and collect rich information, providing abundant texture information about the external environment. Visual SLAM has therefore become a hotspot of current research and has been applied in autonomous navigation, VR/AR, and other fields.

[0003] Traditional point-feature-based visual...


Application Information

IPC(8): G06T 7/73; G06T 7/10; G06N 3/04
CPC: G06T 7/73; G06T 7/10; G06N 3/045
Inventors: 潘树国, 盛超, 曾攀, 黄砺枭, 赵涛, 王帅, 高旺
Owner: SOUTHEAST UNIV