
High-resolution remote sensing image semantic segmentation method based on model depth integration

A remote sensing image semantic segmentation technology, applied to biological neural network models, character and pattern recognition, instruments, etc. It addresses the problem that a single network cannot capture ground-object features across the large target-scale span of remote sensing images, with the effects of reducing training time and training difficulty while ensuring model accuracy.

Pending Publication Date: 2021-03-26
CENT SOUTH UNIV

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to solve the problem that a deep fully convolutional network with a single depth structure cannot capture the large-span feature scales in remote sensing images. Introducing the idea of ensemble learning, it proposes a high-resolution remote sensing image semantic segmentation method based on deep model integration. The method fuses the scale information extracted by networks of different depths: shallow networks focus on information about small-scale objects, while deep networks focus on information about large-scale objects, thereby solving the problem of the large scale span of remote sensing images.
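The intuition that shallow networks focus on small objects while deep networks focus on large ones follows from how a convolutional network's receptive field grows with depth. A minimal sketch of that arithmetic (the layer counts and kernel size are illustrative assumptions, not the patent's actual architecture):

```python
# Receptive field of a stack of 3x3 convolutions with stride 1:
# each layer adds (kernel - 1) pixels on a side, so a deeper branch
# "sees" larger ground objects. Numbers are purely illustrative.
def receptive_field(num_layers, kernel=3):
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1
    return rf

print(receptive_field(3))   # shallow branch: 7x7-pixel view
print(receptive_field(10))  # deep branch: 21x21-pixel view
```

In practice pooling and strided convolutions make the receptive field grow much faster, which widens the gap between shallow and deep branches further.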



Examples


Embodiment 1

[0066] To verify the effectiveness of the proposed ED-FNet network architecture, experiments in this example are carried out on the ISPRS Vaihingen and ISPRS Potsdam datasets.

[0067] The Vaihingen dataset consists of 33 aerial images collected over a 1.38 km² area of Vaihingen with a spatial resolution of 9 cm. The average size of each image is 2494 × 2064 pixels, and each image has three bands corresponding to the near-infrared (NIR), red (R), and green (G) wavelengths. The dataset also provides a digital surface model (DSM) as supplementary data, which represents the surface height of all objects in the image. Sixteen of these images carry manually annotated pixel-level labels, with each pixel classified into one of six land cover classes. Eleven images in this dataset are used for training, and the remaining five images (image ids: 11, 15, 28, 30, 34) are used to test the model in this embodiment.

[0068] The Potsdam dataset consists of 38 high-resolution a...

Embodiment 2

[0092] In this embodiment, an adaptive fusion module (AFM) is proposed to solve the problem of how to fuse the multiple output results of the model. To verify the effectiveness of the AFM, an ablation experiment is designed; the impact of the depthwise separable convolution module on model performance is also tested. In this part of the experiment, U-Net is selected as the backbone network structure, and "+AFM" denotes the addition of the adaptive fusion module. In the last comparison experiment, the multiple loss constraints in DE_UNet are removed and only the loss of the last layer of the network is retained, to verify the importance of the loss constraints corresponding to models of different depths.
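The patent text shown here does not give the AFM's exact formulation; one common way such a fusion module is realized is a softmax-normalized weighted sum of the per-depth score maps, where the raw weights are learned during training. A minimal pure-Python sketch under that assumption (the branch names and score values are hypothetical, showing a single pixel with three classes):

```python
import math

def softmax(ws):
    """Normalize learnable fusion weights so they sum to 1."""
    e = [math.exp(w) for w in ws]
    s = sum(e)
    return [x / s for x in e]

def fuse(outputs, raw_weights):
    """Weighted sum of per-class scores produced by sub-networks of
    different depths. Not the patent's exact AFM, only an illustration."""
    w = softmax(raw_weights)
    n_classes = len(outputs[0])
    return [sum(wi * out[c] for wi, out in zip(w, outputs))
            for c in range(n_classes)]

shallow = [0.7, 0.2, 0.1]   # favours small-scale objects
middle  = [0.4, 0.4, 0.2]
deep    = [0.1, 0.6, 0.3]   # favours large-scale objects

fused = fuse([shallow, middle, deep], raw_weights=[0.0, 0.0, 0.0])
print(fused)  # equal raw weights reduce to a plain average
```

Removing the learned weights and fixing them to be equal recovers exactly the "average weighted fusion" baseline used in the ablation, which is why the comparison isolates the contribution of the adaptive weights.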

[0093] The experimental results are shown in Table 2. Compared with the original U-Net, the DE_UNet+AFM model proposed by the present invention improves OA / mF1 / mIoU by 1.23% / 1.8% / 2.5%. If the adaptive fusion module (AFM) is removed, the average weighted fusion method is used over...

Embodiment 3

[0098] To further verify the effectiveness of the network of the present invention, an experiment was carried out on the Potsdam dataset. Compared with the Vaihingen dataset, the Potsdam dataset has larger image coverage and higher pixel resolution. A single Potsdam image contains more local texture information and spatial multi-scale information, its background is more complex, and its segmentation is more difficult; the accuracy of the same model on the Potsdam dataset is therefore often lower than on the Vaihingen dataset. The specific numerical results are shown in Table 3, which reports the semantic segmentation results on the Potsdam dataset. The accuracy metric for each class is IoU. The best results for VGG and ResNet backbones at different depths are marked in gray.

[0099] It can be seen from the table that DE_UNet improves OA / mF1 / mIoU over U-Net by 1.13% / 0.87% / 1.25%, and DE_PNet improves OA / mF1 / mIo...



Abstract

The invention discloses a high-resolution remote sensing image semantic segmentation method based on model depth integration. The method comprises the steps of designing an end-to-end learning framework based on integrated deep fully convolutional networks, jointly learning the multi-scale and multi-spatial-structure semantic information in a remote sensing image through the fusion of fully convolutional networks of different depths, and obtaining the high-resolution remote sensing image semantic segmentation result. Meanwhile, an adaptive fusion module and a depthwise separable convolution module are provided: the adaptive fusion module can learn the weights for fusing networks of different depths, and the depthwise separable convolution module can reduce the parameter count of the model while ensuring model accuracy, solving the problem that the many parameters of multiple models increase training time and training difficulty.
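The parameter saving claimed for depthwise separable convolution can be checked by counting weights: a standard k×k convolution with C_in input and C_out output channels has k·k·C_in·C_out weights, while the depthwise-separable version has k·k·C_in (depthwise) plus C_in·C_out (pointwise) weights. A small sketch of that count (channel sizes are illustrative; biases are omitted):

```python
def standard_conv_params(k, c_in, c_out):
    # One k*k kernel spanning all input channels, per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k*k kernel per input channel.
    # Pointwise: a 1x1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 256, 256)        # 589,824 weights
sep = depthwise_separable_params(3, 256, 256)  # 67,840 weights
print(std, sep, round(std / sep, 1))           # roughly 8.7x fewer
```

For 3×3 kernels the ratio approaches 1/C_out + 1/9, so replacing standard convolutions cuts the parameter count by close to 9× at typical channel widths, which is the mechanism behind the reduced training cost mentioned in the abstract.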

Description

Technical field

[0001] The invention relates to the technical field of semantic segmentation of remote sensing images, in particular to a method for semantic segmentation of high-resolution remote sensing images based on deep model integration.

Background technique

[0002] High-resolution remote sensing images offer a bird's-eye view and can repeatedly acquire data over large areas. They are widely used in many fields, such as land surveillance, land cover mapping, detection of important ground facilities, smart city construction, and traffic planning. Image segmentation, as a basic image analysis technique, aims to partition an image into a set of disjoint regions divided according to specific attributes such as texture, color, shape, size, and grayscale. Traditional segmentation methods classify images based on different spatial units, including pixels, moving windows, objects, and scenes. However, since traditional methods only involve low-level features in the spect...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/34; G06K9/62; G06N3/04
CPC: G06V20/13; G06V10/267; G06N3/045; G06F18/214; Y02T10/40
Inventor: 陈力, 崔振琦, 彭剑, 黄浩哲
Owner: CENT SOUTH UNIV