
Remote sensing scene image classification method based on multi-level dense feature fusion

A remote sensing scene image classification technology that addresses the low classification accuracy of hyperspectral images, with the effects of improving generalization ability and reducing model complexity.

Pending Publication Date: 2021-11-26
QIQIHAR UNIVERSITY

AI Technical Summary

Problems solved by technology

[0007] The purpose of the present invention is to solve the problem of low classification accuracy of hyperspectral images, caused by the high-dimensional characteristics of hyperspectral images and the small number of training samples in existing hyperspectral image extraction processes, and to propose a method for hyperspectral image classification based on a dual-branch spectral multi-scale attention network.



Examples


Specific Embodiment 1

[0021] Specific Embodiment 1: In this embodiment, the remote sensing scene image classification method based on multi-level dense feature fusion proceeds as follows:

[0022] Step 1: Collect a hyperspectral image data set X and the corresponding label vector data set Y.

[0023] Step 2: Establish a lightweight convolutional neural network, BMDF-LCNN, based on dual-branch multi-level dense feature fusion.

[0024] Step 3: Input the hyperspectral image data set X and the corresponding label vector data set Y into the established lightweight convolutional neural network BMDF-LCNN, and iteratively optimize it with the Momentum algorithm to obtain the optimal network BMDF-LCNN.

[0025] Step 4: Input the hyperspectral image to be tested into the optimal network BMDF-LCNN to predict its classification result.
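The Momentum optimization named in step 3 can be sketched as follows. The patent does not disclose the loss function, gradients, or parameter shapes, so the toy quadratic objective below is a hypothetical stand-in that only illustrates the update rule itself:

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One Momentum update: the velocity v accumulates past gradients,
    which smooths the descent direction across iterations."""
    v = mu * v - lr * grad
    return w + v, v

# Toy example: minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([4.0, -2.0])
v = np.zeros_like(w)
for _ in range(300):
    w, v = momentum_step(w, v, grad=w)
print(np.round(w, 4))  # w has been driven close to the minimum at the origin
```

In the actual method the gradient would come from backpropagating the classification loss of BMDF-LCNN over the training pairs (X, Y).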

Specific Embodiment 2

[0026] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in the specific process of step 2, establishing the lightweight convolutional neural network BMDF-LCNN based on dual-branch multi-level dense feature fusion:

[0027] The lightweight convolutional neural network BMDF-LCNN based on dual-branch multi-level dense feature fusion comprises an input layer, nine groups (Group1 through Group9), and an output classification layer.

[0028] Other steps and parameters are the same as those in Embodiment 1.

Specific Embodiment 3

[0029] Specific Embodiment 3: This embodiment differs from Specific Embodiments 1 and 2 in the connection relationship of the lightweight convolutional neural network BMDF-LCNN based on dual-branch multi-level dense feature fusion:

[0030] The output of the input layer connects to the first group Group1; the output of Group1 connects to Group2; the output of Group2 connects to Group3; the output of Group3 connects to Group4; the output of Group4 connects to Group5; the output of Group5 connects to Group6; the output of Group6 connects to Group7; the output of Group7 connects to Group8; the output of Group8 ...
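The head-to-tail wiring described above amounts to a purely sequential pipeline from the input layer through the nine groups to the classification layer. A minimal sketch follows; the layers inside each group are not disclosed in this excerpt, so each group is stood in by a hypothetical placeholder transform that merely records the traversal order:

```python
# Each "group" is represented by a simple callable; in the real BMDF-LCNN
# each would be a block of convolutional layers.
def make_group(name):
    def group(x):
        return x + [name]  # append the group's name to trace the data path
    return group

groups = [make_group(f"Group{i}") for i in range(1, 10)]

def forward(x):
    # input layer -> Group1 -> Group2 -> ... -> Group9 -> classification layer
    for g in groups:
        x = g(x)
    return x

trace = forward([])
print(trace[0], trace[-1])  # Group1 Group9
```

The point of the sketch is only the topology: every group consumes exactly the previous group's output, with no skip connections between groups at this level of description.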



Abstract

The invention discloses a remote sensing scene image classification method based on multi-level dense feature fusion, and relates to a remote sensing scene image classification method. The objective of the invention is to solve the problem of low classification accuracy of hyperspectral images caused by the high-dimensional characteristics of hyperspectral images and small training samples in existing hyperspectral image extraction processes. The method comprises the following steps: step 1, acquiring a hyperspectral image data set X and a corresponding label vector data set Y; step 2, establishing a lightweight convolutional neural network BMDF-LCNN based on dual-branch multi-level dense feature fusion; step 3, acquiring the optimal network BMDF-LCNN; and step 4, inputting a hyperspectral image to be tested into the optimal network BMDF-LCNN to predict its classification result. The method is applied in the field of image classification.

Description

Technical Field

[0001] The invention relates to a remote sensing scene image classification method.

Background

[0002] At present, high-resolution remote sensing images are applied in many fields, such as remote sensing scene classification [1], hyperspectral image classification [2], change detection [3-4], geographic imagery, and land use classification [6-7], among others. However, the complex spatial patterns and geometric structures of remote sensing images make image classification very difficult. Therefore, effectively understanding the semantic content of remote sensing images is particularly important. The purpose of this research is to find a simple and efficient lightweight network model that can accurately understand the semantic content of a remote sensing image and correctly determine which scene category it belongs to.

[0003] In order to effectively extract image features, researchers have proposed many methods. Initially, feature...

Claims


Application Information

IPC (8th edition): G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06N 3/045; G06F 18/25; G06F 18/241; G06F 18/214
Inventors: Shi Cuiping (石翠萍), Zhang Xinlei (张鑫磊), Wang Tianyi (王天毅)
Owner QIQIHAR UNIVERSITY