
Multi-source remote sensing image classification method based on robust deep semantic segmentation network

A semantic segmentation and remote sensing image technology, applied in the field of robust learning for remote sensing semantic segmentation, which addresses the problems of noisy labels adversely affecting model learning and the low accuracy of classification results.

Active Publication Date: 2020-10-20
WUHAN UNIV

AI Technical Summary

Problems solved by technology

[0005] The present invention mainly addresses the low accuracy of classification results that arises in the prior art when remote sensing images are classified using open-source land cover classification data, where noisy labels can adversely affect model learning. It provides a multi-source remote sensing image classification method based on a robust deep semantic segmentation network (Robust Loss Function of Remote Sensing Imagery, RSRLF). The method consists of a fault-tolerant loss function and adaptive category equalization weights, and can effectively improve the classification accuracy of remote sensing images based on open-source land cover classification datasets.
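The exact RSRLF formulation is not reproduced in this summary. As a rough, hypothetical sketch of how a fault-tolerant loss can be paired with adaptive category equalization weights, the example below combines a class-weighted cross-entropy with a bounded reverse term (a common noise-tolerant construction); the function name, coefficients, and tensor shapes are assumptions, not the patent's definition.

```python
# Hypothetical sketch only: a noise-tolerant segmentation loss with per-class weights.
# This is NOT the patent's RSRLF definition; it illustrates the general idea of
# bounding the loss so mislabeled pixels cannot dominate training.
import torch
import torch.nn.functional as F

def robust_weighted_loss(logits, labels, class_weights, alpha=1.0, beta=0.5):
    """
    logits:        (N, C, H, W) raw network outputs
    labels:        (N, H, W) integer class labels (possibly noisy)
    class_weights: (C,) adaptive category equalization weights
    """
    # Class-weighted cross-entropy emphasizes rare ground-object categories.
    ce = F.cross_entropy(logits, labels, weight=class_weights)

    # Bounded "reverse" term: clamping the one-hot target before the log keeps
    # the per-pixel loss finite, limiting the influence of noisy labels.
    probs = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(labels, num_classes=logits.shape[1])
    one_hot = one_hot.permute(0, 3, 1, 2).float().clamp(min=1e-4)
    rce = -(probs * torch.log(one_hot)).sum(dim=1).mean()

    return alpha * ce + beta * rce
```

The design point illustrated here is that both terms stay bounded per pixel, so a mislabeled pixel contributes a limited gradient rather than dominating the update.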

Method used


Image

  • Drawings (3): Multi-source remote sensing image classification method based on robust deep semantic segmentation network

Examples


Embodiment Construction

[0038] To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the embodiments described here are intended only to illustrate and explain the present invention, not to limit it.

[0039] Referring to Figure 1, the present invention provides a multi-source remote sensing image classification method based on a robust deep semantic segmentation network, which consists of a fault-tolerant loss function and an adaptive category weight. The method includes the following steps:

[0040] Step 1: Deep semantic segmentation network training and ground-object classification. The remote sensing image dataset is used as the input data for training the deep semantic segmentation network. The loss function in the deep semantic segmentation network is the RSRLF rob...
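A minimal sketch of such a training step is given below. It reuses the hypothetical robust_weighted_loss sketched earlier and a stand-in DeepLabV3 backbone from torchvision; the patent excerpt does not specify the network architecture, optimizer, or hyperparameters, so all of those choices are assumptions.

```python
# Minimal, hypothetical training sketch for Step 1. The backbone, optimizer and
# learning rate are stand-ins; only the idea (train a deep semantic segmentation
# network with the robust, class-weighted loss) follows the description above.
import torch
import torchvision

def train_one_epoch(model, loader, class_weights, optimizer, device="cuda"):
    model.train()
    for images, labels in loader:              # images: (N, 3, H, W), labels: (N, H, W)
        images, labels = images.to(device), labels.to(device)
        logits = model(images)["out"]          # (N, num_classes, H, W)
        loss = robust_weighted_loss(logits, labels, class_weights.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Example wiring (all names hypothetical):
# model = torchvision.models.segmentation.deeplabv3_resnet50(weights=None, num_classes=10).to("cuda")
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# train_one_epoch(model, train_loader, class_weights, optimizer)
```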



Abstract

The invention discloses a multi-source remote sensing image classification method based on a robust deep semantic segmentation network (Robust Loss Function of Remote Sensing Imagery, RSRLF). The method combines a fault-tolerant loss function with adaptive category weights. Performing remote sensing semantic segmentation on open-source land cover classification datasets greatly reduces sample annotation work, but such datasets contain a certain proportion of mislabeled samples. The advantages of the method are that the fault-tolerant loss function suppresses the model's learning of noisy labels and avoids over-fitting to them, while the category equalization constraint module addresses the large differences in the global sample scale of ground-object categories and the inconsistent degree of confusion among categories. Their combination effectively solves the problems of noisy labels and unbalanced category samples when remote sensing semantic segmentation is carried out on an open-source land cover classification dataset, and improves the ground-object classification accuracy obtained from such datasets.
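The abstract does not detail how the adaptive category equalization weights are computed. One common way to obtain weights of the kind described (larger for rare, globally under-represented ground-object classes) is median-frequency balancing over label pixel counts; the sketch below is such an illustration, not the patent's formula.

```python
# Hypothetical sketch: median-frequency class-balance weights from label pixel counts.
# Rare ground-object categories receive weights > 1, frequent ones < 1.
import numpy as np

def class_balance_weights(label_maps, num_classes):
    """label_maps: iterable of (H, W) integer label arrays from the open-source dataset."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for labels in label_maps:
        counts += np.bincount(labels.ravel(), minlength=num_classes)[:num_classes]

    freq = counts / counts.sum()                # per-class pixel frequency
    freq = np.where(freq > 0, freq, np.nan)     # ignore classes absent from the data
    weights = np.nanmedian(freq) / freq         # median-frequency balancing
    return np.nan_to_num(weights, nan=0.0)      # absent classes get zero weight
```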

Description

technical field

[0001] The invention belongs to the intersection of remote sensing interpretation and artificial intelligence, and relates to a multi-source remote sensing image classification method based on a robust deep semantic segmentation network, specifically a robust learning method for remote sensing semantic segmentation that collaboratively combines fault-tolerant deep learning with adaptive class-balance weights.

Background technique

[0002] Changes in land use and land cover directly affect climate and biodiversity worldwide. Rapid and effective acquisition of real and reliable land cover data can provide important support for global climate change research, land cover change detection, and ecological modeling. As the spatial resolution of remote sensing images continues to improve, the resolution of large-scale land cover classification mapping also improves. How to quickly and accurately obtain land cover class...

Claims


Application Information

IPC (8): G06K9/00, G06K9/34, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06V20/13, G06V10/267, G06N3/045, G06F18/214
Inventor: 李彦胜 (Li Yansheng), 黄隆扬 (Huang Longyang), 肖锐 (Xiao Rui), 张永军 (Zhang Yongjun)
Owner WUHAN UNIV