
Fine-grained boundary extraction method for urban scene semantic segmentation based on laser point cloud

A laser point cloud and semantic segmentation technology, applied in image analysis, image data processing, instruments, and similar fields, that addresses the problem of low accuracy in segmentation results.

Pending Publication Date: 2021-05-18
NORTH CHINA UNIV OF WATER RESOURCES & ELECTRIC POWER


Problems solved by technology

[0008] The purpose of this application is to provide a fine-grained boundary extraction method for urban scene semantic segmentation based on laser point cloud, so as to solve the problem of low accuracy of semantic segmentation results in the prior art.


Examples


Embodiment 1

[0032] 1. Statistical analysis of 2D image datasets to select suitable datasets;

[0033] Using column charts, line charts and scatter plots (Figure 3), this embodiment compares and analyzes six benchmark datasets widely used in the field of urban scene semantic segmentation: SIFT-flow, PASCAL VOC2012, PASCAL-part, MS COCO, Cityscapes and PASCAL-Context. Statistics are computed on the training and validation sets, excluding the test set. First, the total number of categories and the total number of instances in the training and validation sets of the six datasets are counted; then the number of categories per image, the number of instances per image, the number of images containing each specific category (that is, how many images each category appears in), and the correspondence between the number of categories and the number of instances are tallied. The statistical results for the number of categories per image are shown in Figure 4 ...
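
The patent does not publish its statistics tooling. As a minimal sketch, the Python below computes the per-image counts described above, assuming a COCO-style annotation JSON; the file path and field names are illustrative, not taken from the patent.

```python
import json
from collections import Counter, defaultdict

# Hypothetical path to a COCO-style instance-annotation file.
ANN_FILE = "annotations/instances_train.json"

with open(ANN_FILE) as f:
    coco = json.load(f)

# Per-image tallies: instances per image and distinct categories per image.
instances_per_image = Counter()
categories_per_image = defaultdict(set)
for ann in coco["annotations"]:
    instances_per_image[ann["image_id"]] += 1
    categories_per_image[ann["image_id"]].add(ann["category_id"])

# Per-category image frequency: in how many images each category appears.
images_per_category = Counter()
for cats in categories_per_image.values():
    for c in cats:
        images_per_category[c] += 1

total_instances = sum(instances_per_image.values())
print(f"images: {len(coco['images'])}, "
      f"categories: {len(coco['categories'])}, "
      f"instances: {total_instances}")
print("mean categories/image:",
      sum(len(s) for s in categories_per_image.values()) / len(coco["images"]))
```

The same tallies, run separately on each dataset's training and validation annotations, yield the inputs for the charts described above.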

Embodiment 2

[0064] As shown in Figure 3, this embodiment differs from Embodiment 1 in that it uses an embedded conditional random field to perform fine-grained boundary extraction on the 2D image segmentation result.

[0065] The last network layer of the DeepLabV2 ResNet-101 model is an upsampling layer, which upsamples the coarse score map output by the convolutional neural network and restores it to the original resolution; a fully connected conditional random field is then integrated into the network. To do this, a network layer called the Multiple Stage Mean Field Layer (MSMFL) is added: the original image and the network's initial segmentation result are input into the MSMFL simultaneously for maximum a posteriori inference, so that label consistency between similar pixels and pixel neighbors is maximized. In essence, the MSMFL reformulates the CRF inference algorithm: each transformation step is treated as a layer of the neural network, and these layers are then reco...
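
The MSMFL internals are not fully disclosed beyond unrolling CRF mean-field inference into network layers. The PyTorch sketch below illustrates the idea in the spirit of CRF-as-RNN, with one deliberate simplification: message passing uses a fixed Gaussian spatial kernel, whereas the patent's layer also takes the original image as input (i.e., image-dependent appearance terms, which this sketch omits). All names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLayer(nn.Module):
    """Unrolled mean-field CRF refinement (CRF-as-RNN style sketch).

    Simplification: message passing is a fixed per-class Gaussian blur;
    a full implementation would add image-dependent bilateral terms.
    """

    def __init__(self, num_classes, num_iters=5, kernel_size=7, sigma=2.0):
        super().__init__()
        self.num_iters = num_iters
        # Fixed 2D Gaussian kernel, one copy per class channel (depthwise).
        coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        k2d = torch.outer(g, g)
        k2d = (k2d / k2d.sum()).expand(num_classes, 1, -1, -1).clone()
        self.register_buffer("kernel", k2d)
        # Learnable label-compatibility transform, initialised to Potts.
        self.compat = nn.Conv2d(num_classes, num_classes, 1, bias=False)
        with torch.no_grad():
            potts = 1.0 - torch.eye(num_classes)
            self.compat.weight.copy_(potts.view(num_classes, num_classes, 1, 1))

    def forward(self, unary_logits):
        q = unary_logits
        pad = self.kernel.shape[-1] // 2
        for _ in range(self.num_iters):           # each iteration = one "stage"
            probs = F.softmax(q, dim=1)           # current label distribution
            msg = F.conv2d(probs, self.kernel, padding=pad,
                           groups=probs.shape[1])  # message passing (smoothing)
            q = unary_logits - self.compat(msg)   # compatibility + unary update
        return q                                  # refined score map (logits)

# Usage: refined = MeanFieldLayer(num_classes=19)(coarse_logits)  # (N,19,H,W)
```

Because every step is differentiable, such a layer can be trained end-to-end with the segmentation backbone, which is the point of embedding the CRF rather than applying it as post-processing.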



Abstract

The invention relates to a fine-grained boundary extraction method for urban scene semantic segmentation based on a laser point cloud. The method comprises the steps of: designing a deep convolutional neural network model adapted to urban scenes, training the model, and performing semantic segmentation of the acquired image data with the trained model to obtain fine-grained boundaries of the urban scene. A preliminary semantic segmentation result of the urban scene is obtained from the 2D image; fine boundary extraction is carried out through a post-processing conditional random field and an embedded conditional random field; finally, the exterior orientation elements of the camera are calculated from its interior orientation elements through a direct transformation algorithm, yielding an overall mapping between the image and the corresponding laser point cloud; on this basis the point cloud is input, and a refined semantic segmentation result based on the laser point cloud is obtained. The method can improve the precision and effect of semantic segmentation of urban scenes.
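
The final step maps 2D labels onto the laser point cloud via the camera's orientation elements. As a hedged illustration only, the NumPy sketch below projects points under a pinhole model and transfers per-pixel labels; the patent's direct transformation solver for recovering the orientation elements is not reproduced, and all function and parameter names are assumptions.

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project 3D laser points into the image with a pinhole camera model.

    K (3x3 intrinsics) and [R|t] (exterior orientation) stand in for the
    interior/exterior orientation elements the method recovers via a
    direct transformation; the exact solver is not shown here.
    """
    cam = R @ points_xyz.T + t.reshape(3, 1)   # world -> camera frame, (3, N)
    uvw = K @ cam                              # camera -> homogeneous pixels
    uv = (uvw[:2] / uvw[2]).T                  # perspective divide, (N, 2)
    in_front = cam[2] > 0                      # keep points in front of camera
    return uv, in_front

def transfer_labels(points_xyz, label_map, K, R, t, ignore=-1):
    """Assign each point the semantic label of the pixel it projects onto."""
    h, w = label_map.shape
    uv, in_front = project_points(points_xyz, K, R, t)
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    valid = in_front & (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    labels = np.full(len(points_xyz), ignore, dtype=int)
    labels[valid] = label_map[rows[valid], cols[valid]]
    return labels
```

Points that fall outside the image or behind the camera keep the ignore label, so downstream point-cloud refinement can treat them as unlabeled.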

Description

Technical field

[0001] The invention relates to a fine-grained boundary extraction method for urban scene semantic segmentation based on laser point clouds.

Background technique

[0002] In recent years, with the emergence of large-scale datasets, the falling cost of computer hardware and the improvement of GPU parallel computing capabilities, Deep Convolutional Neural Networks (DCNNs) have come into wider use. Unlike traditional hand-crafted features, DCNNs automatically learn rich feature representations from data, and thus perform well on many computer vision problems such as semantic segmentation. Fully Convolutional Neural Networks (FCNs) are a subclass of DCNNs whose feature-extraction performance is particularly prominent. For the task of scene semantic segmentation, the global context information between different category labels affects precise localization. However, FCNs do not have the ability to model the contextual relationship betw...

Claims


Application Information

IPC(8): G06T7/10
CPC: G06T7/10; G06T2207/10028; G06T2207/20081; G06T2207/20084
Inventors: 张蕊, 刘孟轩, 孟晓曼, 曾志远
Owner: NORTH CHINA UNIV OF WATER RESOURCES & ELECTRIC POWER