Remote sensing image semantic segmentation method based on self-supervised contrast learning

A method for semantic segmentation of remote sensing images, applied in neural learning methods, instruments, biological neural network models, etc.; it addresses the problem of performing semantic segmentation with only a small amount of annotation and achieves good results

Active Publication Date: 2021-06-22
CENT SOUTH UNIV +1


Problems solved by technology

[0008] In view of this, the purpose of the present invention is to learn features directly from unlabeled images in order to help downstream semantic segmentation tasks that have only a small amount of annotation. To this end, contrastive self-supervised learning is applied to remote sensing semantic segmentation datasets, a global style and local matching contrastive learning framework is proposed, and a remote sensing image semantic segmentation method based on self-supervised contrastive learning is formed.




Embodiment Construction

[0053] The technical solutions in the embodiments of the present invention will now be described clearly and completely. It should be understood that the described embodiments are merely some, and not all, of the embodiments of the present invention.

[0054] As shown in Figure 1, a remote sensing image semantic segmentation method based on self-supervised contrastive learning includes the following steps:

[0055] Step 1: build the DeepLab V3+ network model;

[0056] Step 2: pre-train the encoder of the network model on unlabeled data;

[0057] Step 3: after pre-training is completed, perform supervised semantic segmentation training of the network model on labeled samples;

[0058] Step 4: use the network model that has undergone supervised semantic segmentation training to semantically segment remote sensing images.
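The four steps above can be sketched as a two-stage training outline. Every function name here is an illustrative stub standing in for the real training machinery, not the patent's implementation:

```python
def build_deeplabv3plus():
    # Step 1: in practice this constructs a DeepLab V3+ model
    # (backbone encoder + ASPP + decoder); a dict stands in here.
    return {"encoder": "backbone", "decoder": "aspp-decoder"}

def contrastive_pretrain(model, unlabeled_images):
    # Step 2: pre-train only the encoder with a contrastive
    # objective on unlabeled data (no segmentation masks needed).
    model["encoder_pretrained"] = True
    return model

def supervised_finetune(model, labeled_images, masks):
    # Step 3: supervised segmentation training on the (much smaller)
    # labeled set, starting from the pre-trained encoder.
    assert model.get("encoder_pretrained"), "pre-train before fine-tuning"
    model["finetuned"] = True
    return model

def segment(model, image):
    # Step 4: inference; a real model returns a per-pixel class map.
    assert model.get("finetuned"), "fine-tune before inference"
    return "per-pixel class map"

model = build_deeplabv3plus()
model = contrastive_pretrain(model, unlabeled_images=[])
model = supervised_finetune(model, labeled_images=[], masks=[])
prediction = segment(model, image=None)
```

The key design point the outline captures is that the encoder is shared between the two stages: only the unlabeled pre-training stage touches the contrastive objective, and the decoder is trained solely in the supervised stage.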

[0059] Contrastive learning constructs representations by learning to compare two things; its core...
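A minimal sketch of the standard InfoNCE objective that underlies this kind of contrastive learning. The temperature, feature dimensionality, and number of negatives below are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: pull the positive view toward the anchor and push the
    negatives away, with similarity measured by cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(anchor, positive) / temperature)
    negs = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return float(-np.log(pos / (pos + negs)))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)        # augmented view of the same image
negatives = [rng.normal(size=8) for _ in range(16)]  # features of other images
loss = info_nce_loss(anchor, positive, negatives)
```

The loss is always positive and shrinks as the anchor and its positive view become more similar relative to the negatives, which is what drives the encoder to map different views of one image to nearby features.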



Abstract

The invention discloses a remote sensing image semantic segmentation method based on self-supervised contrastive learning. The method comprises the following steps: constructing a semantic segmentation network model (such as DeepLab V3+); pre-training the encoder of the network model with unlabeled data; after pre-training is completed, performing supervised semantic segmentation training of the network model on labeled samples; and performing semantic segmentation of remote sensing images with the network model that has undergone supervised semantic segmentation training. During pre-training, contrastive learning is carried out by combining global style contrast with local matching contrast. Compared with the prior art, contrastive self-supervised learning is applied to remote sensing semantic segmentation datasets, a global style and local matching contrastive learning framework is provided, and a remote sensing image semantic segmentation method based on self-supervised contrastive learning is formed, so that the method has a wider application range and a better segmentation effect.
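The combination of a global (image-level) term and a local (location-level) term mentioned in the abstract can be sketched as follows. The feature-map shapes, the NCE form, and the equal 0.5/0.5 weighting are assumptions for illustration, not the patent's actual losses:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    # Normalize feature vectors so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def nce(sim_pos, sim_negs, t=0.1):
    # NCE-style term: one positive similarity vs. an array of negatives.
    num = np.exp(sim_pos / t)
    return float(-np.log(num / (num + np.exp(sim_negs / t).sum())))

# Feature maps of two augmented views of one image: (H, W, C) = (4, 4, 8).
f1 = rng.normal(size=(4, 4, 8))
f2 = rng.normal(size=(4, 4, 8))
other = rng.normal(size=(10, 8))  # pooled features of other images (negatives)

# Global term: contrast image-level (spatially pooled) embeddings.
g1, g2 = l2norm(f1.mean((0, 1))), l2norm(f2.mean((0, 1)))
global_loss = nce(g1 @ g2, l2norm(other) @ g1)

# Local term: features at the same spatial location across the two views
# are positives; features at all other locations are negatives.
p1, p2 = l2norm(f1.reshape(-1, 8)), l2norm(f2.reshape(-1, 8))
local_loss = float(np.mean(
    [nce(p1[i] @ p2[i], np.delete(p2 @ p1[i], i)) for i in range(len(p1))]
))

total = 0.5 * global_loss + 0.5 * local_loss  # equal weighting is an assumption
```

The intuition for the split: the global term encourages whole-image invariance to style-level augmentation, while the local term preserves spatial discrimination, which matters for a dense prediction task like segmentation.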

Description

Technical field [0001] The present invention relates to the field of remote sensing image semantic segmentation, and more particularly to a remote sensing image semantic segmentation method based on self-supervised contrastive learning. Background technique [0002] With the development of remote sensing technology, high-resolution remote sensing images have become increasingly easy to acquire, and they are applied ever more widely in urban planning, disaster monitoring, environmental protection, and transportation. Extracting and identifying the information in remote sensing images is usually the basis of all these applications, and semantic segmentation is a technique for pixel-wise classification of the full image, so it has always been an important and challenging problem in the remote sensing field. [0003] In recent years, the development of deep learning technology has achieved impressive results in semantic segmentation, which is increasingly widely applied to global surface coverage...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/34; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V10/443; G06V10/267; G06V10/757; G06N3/045; G06F18/2155
Inventor 李海峰李益李朋龙丁忆马泽忠张泽烈胡艳肖禾陶超
Owner CENT SOUTH UNIV