Dynamic environment information detection method based on semantic segmentation network and multi-view geometry

A semantic-segmentation and dynamic-environment technology, applied to image enhancement, image analysis, and image data processing. It addresses the low detection accuracy and robustness of prior methods; its effects include improved accuracy and robustness, maintained high precision, and reduced system storage and time overhead.

Pending Publication Date: 2021-03-02
GUANGDONG POWER GRID CORP ZHAOQING POWER SUPPLY BUREAU

AI Technical Summary

Problems solved by technology

[0005] To overcome the low detection accuracy and robustness of the above-mentioned prior art, the present invention provides a dynamic environment information detection method based on a semantic segmentation network and multi-view geometry, which improves detection accuracy and system robustness and can also improve detection speed.



Examples


Embodiment

[0052] As shown in Figures 1-3, an embodiment of the dynamic environment information detection method based on a semantic segmentation network and multi-view geometry includes the following steps:

[0053] Step 1: Calibrate the camera to remove image distortion, then acquire and input the environment image. The specific calibration steps are:

[0054] S1.1: First obtain the intrinsic parameters of the camera, f_x, f_y, c_x, c_y, and normalize the three-dimensional coordinates (X, Y, Z) to the image-plane coordinates (x, y) = (X/Z, Y/Z);

[0055] S1.2: Remove the effect of lens distortion on the image, where [k_1, k_2, k_3, p_1, p_2] are the distortion coefficients of the lens and r is the distance from the point to the origin of the coordinate system:

[0056] x_distorted = x(1 + k_1·r² + k_2·r⁴ + k_3·r⁶) + 2p_1·xy + p_2(r² + 2x²)
       y_distorted = y(1 + k_1·r² + k_2·r⁴ + k_3·r⁶) + p_1(r² + 2y²) + 2p_2·xy
       where r² = x² + y²

[0057] S1.3: Transfer the coordinates in the camera coordinate system to the pixel coordinate system:

[0058] u = f_x·x_distorted + c_x
       v = f_y·y_distorted + c_y
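As an illustration, steps S1.1-S1.3 (normalization, distortion, pixel projection) can be sketched in plain Python; the intrinsic and distortion values below are placeholders, not taken from the patent:

```python
# Hypothetical camera intrinsics and lens distortion coefficients (illustrative only).
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
k1, k2, k3, p1, p2 = 0.1, -0.05, 0.0, 0.001, 0.001

def project_point(X, Y, Z):
    """Project a 3D point in the camera frame to pixel coordinates (S1.1-S1.3)."""
    # S1.1: normalize to the image plane
    x, y = X / Z, Y / Z
    # S1.2: apply radial + tangential distortion; r is the distance to the origin,
    # so r^2 = x^2 + y^2
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # S1.3: transfer to the pixel coordinate system
    u = fx * x_d + cx
    v = fy * y_d + cy
    return u, v
```

A point on the optical axis, e.g. `project_point(0.0, 0.0, 1.0)`, maps to the principal point (c_x, c_y) regardless of the distortion coefficients, which is a quick sanity check for the model.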

[0059] Step 2: Segment the input image...
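Although the rest of Step 2 is truncated here, the preliminary dynamic segmentation it names (masking a-priori dynamic object classes and discarding feature points that fall on them) can be sketched as follows; the class ids and the assumption that the network outputs a per-pixel label map are illustrative, not from the patent:

```python
import numpy as np

# Assumed: `labels` is the per-pixel class map produced by a lightweight
# semantic segmentation network; these class ids are hypothetical.
DYNAMIC_CLASSES = {15, 7}  # e.g. person, car in a VOC-style labeling

def dynamic_mask(labels):
    """Binary mask: True where the pixel belongs to an a-priori dynamic class."""
    return np.isin(labels, list(DYNAMIC_CLASSES))

def keep_static_keypoints(keypoints, labels):
    """Preliminary dynamic segmentation: drop keypoints on dynamic pixels."""
    mask = dynamic_mask(labels)
    return [(u, v) for (u, v) in keypoints if not mask[v, u]]
```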



Abstract

The invention relates to a dynamic environment information detection method based on a semantic segmentation network and multi-view geometry. The method comprises the following steps: calibrating a camera and removing image distortion; acquiring and inputting an environment image; segmenting the input image through a semantic segmentation network to obtain masks of all objects and realize preliminary dynamic segmentation; extracting ORB feature points from the input image and then calculating descriptors; detecting and eliminating dynamic feature points by combining multi-view geometry and semantic information; matching the ORB feature points to obtain the pose information of the robot; judging and inserting key frames, and performing point cloud processing through a local mapping thread to obtain a sparse point cloud map; and using loop detection to optimize the pose and correct drift error. The invention improves the operation precision and robustness of an RGB-D SLAM system in a highly dynamic environment. Meanwhile, by using a lightweight semantic segmentation network, system storage and time overhead can be reduced, so real-time performance is preserved while high precision is maintained.
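The multi-view-geometry check mentioned in the abstract is commonly realized as a point-to-epipolar-line distance test: a matched feature whose error against the epipolar constraint is large is likely on a moving object. A minimal sketch under that assumption (the fundamental matrix F and the pixel threshold are assumed inputs; the patent's exact criterion is not given here):

```python
import numpy as np

def epipolar_distance(p1, p2, F):
    """Distance (pixels) from p2 to the epipolar line F @ p1 (homogeneous points)."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                      # epipolar line in image 2: a*u + b*v + c = 0
    a, b, _ = line
    return abs(x2 @ line) / np.hypot(a, b)

def flag_dynamic(matches, F, thresh=1.0):
    """Flag matched feature pairs whose epipolar error exceeds `thresh` pixels."""
    return [bool(epipolar_distance(p1, p2, F) > thresh) for p1, p2 in matches]
```

For a pure horizontal translation, F = [[0,0,0],[0,0,-1],[0,1,0]], so a static match keeps the same row coordinate, while a point that also moves vertically between frames violates the constraint and is flagged as dynamic.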

Description

Technical field
[0001] The invention relates to the field of vision-based positioning and navigation in the autonomous inspection of UAVs, and more specifically, to a dynamic environment information detection method based on a semantic segmentation network and multi-view geometry.
Background technique
[0002] In the process of UAV intelligent inspection, the UAV needs to decide its next operation autonomously according to real-time information about the current environment. Therefore, real-time positioning and mapping of the working environment are important links in UAV intelligent inspection. Especially in the collaborative work of multiple UAVs arranged in a grid, the environment detected by each UAV is a dynamic scene (including moving objects that sometimes disappear), so special algorithms need to be developed for dynamic scenes. [0003] Simultaneous Localization and Mapping (SLAM) is a method that can estimate the current positio...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/10; G06T7/33; G06T7/70
CPC: G06T2207/10016; G06T2207/20208; G06T7/10; G06T7/33; G06T7/70
Inventor 孙仝游林辉胡峰陈政张谨立宋海龙黄达文王伟光梁铭聪黄志就何彧陈景尚谭子毅尤德柱区嘉亮罗鲜林
Owner GUANGDONG POWER GRID CORP ZHAOQING POWER SUPPLY BUREAU