Data preprocessing method, map construction method, loop detection method and system

A data preprocessing and semantic mapping technology applied in the field of navigation, which can solve problems such as positioning difficulties.

Active Publication Date: 2021-07-06
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0010] In order to solve the above problems, the first aspect of the present disclosure provides a data preprocessing method that eliminates dynamic obstacles in the scene according to attribute semantics and uses only objects with static attributes as valid reference data for the map, thereby avoiding the positioning difficulties caused by the disappearance of moving objects.



Examples


Embodiment 1

[0082] As shown in Figure 2, the data preprocessing method of this embodiment includes:

[0083] (1) Pre-training stage:

[0084] As shown in Figure 1, construct and train an object recognition pre-training network, a pixel segmentation network and an attribute decision tree.

[0085] In one implementation, the object recognition pre-training network is a deep neural network, which is used to determine the object recognition granularity and content, determine the type and position of each object in the image according to that granularity and content, and draw a bounding box around the object.

[0086] For example: the object recognition pre-training network uses the YOLO object recognition network.

[0087] YOLO (You Only Look Once: Unified, Real-Time Object Detection) is an object detection system based on a single neural network, proposed by Joseph Redmon and Ali Farhadi in 2015. YOLO is a convolutional neural network that can predict multiple bounding box positions...
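As an illustration of this detection step, the sketch below runs a pretrained YOLO detector to obtain object types, confidences and bounding boxes. The ultralytics package, the yolov8n.pt weights and the confidence threshold are assumptions made for the example; the patent only specifies that a YOLO object recognition network is used.

```python
# Hypothetical sketch: detecting object types and bounding boxes with a
# pretrained YOLO model. The package, weights and threshold are assumptions;
# the patent only states that a YOLO object recognition network is used.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any pretrained YOLO weights

def detect_objects(image, conf_threshold=0.5):
    """Return (class_name, confidence, xyxy_box) tuples for one image.

    `image` may be a file path or a numpy array.
    """
    result = model(image)[0]
    detections = []
    for box in result.boxes:
        conf = float(box.conf[0])
        if conf < conf_threshold:
            continue
        cls_id = int(box.cls[0])
        detections.append((result.names[cls_id], conf, box.xyxy[0].tolist()))
    return detections
```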

Embodiment 2

[0110] Corresponding to Embodiment 1, this embodiment provides a data preprocessing system, including:

[0111] (1) a pre-training module, which is used to:

[0112] build and train an object recognition pre-training network, a pixel segmentation network, and an attribute decision tree.

[0113] In one implementation, the object recognition pre-training network is a deep neural network, which is used to determine the object recognition granularity and content, determine the type and position of each object in the image according to that granularity and content, and draw a bounding box around the object.

[0114] For example: the object recognition pre-training network uses the YOLO object recognition network.

[0115] YOLO (You Only Look Once: Unified, Real-Time Object Detection) is an object detection system based on a single neural network, proposed by Joseph Redmon and Ali Farhadi in 2015. YOLO is a convolutional neural network that can predict multiple bounding box positions and categories...
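As a companion to the recognition step, the sketch below shows one way the attribute decision step of the pre-processing module could behave: a recognized object class is mapped to a static or dynamic attribute, and only static objects are retained as map references. The class lists and the simple lookup logic are illustrative placeholders for the patent's attribute decision tree, not its actual structure.

```python
# Hypothetical sketch of the attribute step: map a recognized object class to
# a static/dynamic attribute and keep only static objects as map references.
# The class sets and the lookup rule stand in for the patent's attribute
# decision tree; they are not taken from the patent.
STATIC_CLASSES = {"building", "traffic light", "bench", "stop sign"}
DYNAMIC_CLASSES = {"person", "car", "bicycle", "dog"}

def object_attribute(class_name: str) -> str:
    """Return the attribute description ("static", "dynamic" or "unknown")."""
    if class_name in DYNAMIC_CLASSES:
        return "dynamic"
    if class_name in STATIC_CLASSES:
        return "static"
    return "unknown"

def keep_static(detections):
    """Filter (class_name, conf, box) detections down to static objects."""
    return [d for d in detections if object_attribute(d[0]) == "static"]
```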

Embodiment 3

[0128] As shown in Figure 3, this embodiment provides a method for constructing a spatially constrained map, including:

[0129] using the data preprocessing method shown in Figure 2, obtain the corresponding spatial reference points of the same object in the image data before and after the robot's movement;

[0130] convert the corresponding spatial reference points of the same object into spatial coordinates, calculate the robot's motion and perform consistent sampling to obtain the robot's motion estimate, forming the position scale constraint;

[0131] record each object category in the scene, together with its corresponding spatial point cloud position and object semantic feature description, and store them in the map to construct the spatial constraint map.

[0132] For example: between two time instants of data, the motion of the robot is calculated via the spatial reference points. Let the j-th spatial reference point of the i-th obj...
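The paragraph above is truncated, so the patent's exact formulation is not reproduced here. As a generic illustration of computing the robot's motion from corresponding spatial reference points with consistent sampling, the sketch below aligns two 3D point sets with a least-squares rigid transform inside a RANSAC-style loop; the Kabsch/SVD alignment and the thresholds are standard choices, not necessarily the patent's procedure.

```python
# Hypothetical sketch of the motion-estimation step: given Nx3 arrays P and Q
# of corresponding spatial reference points before and after the robot's
# movement, estimate a rigid transform (R, t) with outliers rejected by
# consistent (RANSAC-style) sampling.
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch/SVD alignment)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # reflection-corrected rotation
    return R, cq - R @ cp

def estimate_motion(P, Q, iters=100, tol=0.05, seed=0):
    """Consistent sampling over correspondences, then refit on the inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(P[best_inliers], Q[best_inliers])
```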



Abstract

The disclosure provides a data preprocessing method, a map construction method, a loop detection method and a system. The data preprocessing method includes a pre-training stage: constructing and training an object recognition pre-training network, a pixel segmentation network, and an attribute decision tree; and a pre-processing stage: inputting image data into the object recognition pre-training network to identify each object type and its location box; inputting the object type into the attribute decision tree to obtain the object attribute description; for objects with static attributes, segmenting the data in the corresponding location box at the pixel level through the pixel segmentation network to obtain the object's plane projection pixels and their corresponding spatial point cloud positions; and extracting the plane pixel feature points of the object, obtaining their corresponding spatial point cloud positions as part of the object attribute description, and recording them as the spatial reference points.
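To make the flow of this preprocessing stage concrete, the sketch below chains the stages into a single pass over one frame, reusing the hypothetical detect_objects and object_attribute helpers from the earlier sketches. The segmentation and back-projection helpers are bare placeholders for the patent's pixel segmentation network and point-cloud lookup; only the control flow mirrors the abstract.

```python
# Hypothetical end-to-end sketch of the preprocessing stage summarized above.
# segment_pixels and extract_reference_points are placeholders; detect_objects
# and object_attribute come from the earlier illustrative sketches.
import numpy as np

def segment_pixels(image, box):
    """Placeholder for the pixel segmentation network: mask inside the box."""
    x1, y1, x2, y2 = map(int, box)
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y1:y2, x1:x2] = True
    return mask

def extract_reference_points(mask, depth, K):
    """Placeholder: back-project masked pixels with depth and intrinsics K
    to obtain spatial point cloud positions."""
    ys, xs = np.nonzero(mask)
    z = depth[ys, xs]
    return np.stack([(xs - K[0, 2]) * z / K[0, 0],
                     (ys - K[1, 2]) * z / K[1, 1],
                     z], axis=1)

def preprocess_frame(image, depth, K):
    """Return per-object spatial reference points for one frame."""
    objects = []
    for class_name, conf, box in detect_objects(image):
        if object_attribute(class_name) != "static":
            continue  # dynamic objects are not used as map references
        mask = segment_pixels(image, box)
        points = extract_reference_points(mask, depth, K)
        objects.append({"category": class_name,
                        "box": box,
                        "spatial_reference_points": points})
    return objects
```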

Description

Technical Field

[0001] The present disclosure belongs to the technical field of navigation, and in particular relates to a data preprocessing method, a map construction method, a loop closure detection method and a system.

Background Technique

[0002] The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

[0003] Simultaneous Localization and Mapping (SLAM) technology refers to placing a robot in an unknown environment, having it build an incremental map of the environment starting from an unknown position, and using the created map for autonomous positioning and navigation. SLAM technology enables robots to achieve true autonomous navigation.

[0004] The SLAM problem was first proposed in an article written by Cheeseman and Smith in 1985, which established the statistical principles describing geometric uncertainty and the relationships between features. These ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G05D1/02
CPC: G05D1/02; G06F18/214
Inventor 周风余万方陈科刘美珍顾潘龙庄文密于帮国杨志勇边钧健
Owner SHANDONG UNIV