Hyperspectral Image Classification Method Based on Multi-class Generative Adversarial Network

A hyperspectral image classification technology, applied in the fields of image processing and image classification, which addresses the problems of too few labeled samples, network overfitting, and low classification accuracy, and thereby enhances the network's ability to extract features and improves classification accuracy.

Active Publication Date: 2021-09-03
XIDIAN UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Although this method can reduce the influence of radiometric and geometric errors, its disadvantage is that only the spectral features of each pixel are extracted while the spatial features of the pixel's neighborhood are not, resulting in low classification accuracy.
Although this method retains the nonlinear information of the samples, its disadvantage is that the number of samples is too small relative to the number of network parameters, which leads to network over-fitting and low classification accuracy.



Examples


Embodiment Construction

[0040] The present invention will be further described below in conjunction with the accompanying drawings.

[0041] With reference to Figure 1, the concrete steps for implementing the present invention are as follows:

[0042] Step 1, input hyperspectral image.

[0043] Step 2, get the sample set.

[0044] Centering on each labeled pixel in the hyperspectral image, delineate a spatial window of 27×27 pixels.

[0045] All the pixels in each spatial window form a data cube.

[0046] Combine all data cubes into a sample set of hyperspectral images.
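A minimal sketch of how paragraphs [0044]-[0046] could be implemented. All function and variable names are hypothetical; it assumes the hyperspectral image is a NumPy array of shape (height, width, bands) and that a ground-truth map marks labeled pixels with nonzero class indices.

```python
import numpy as np

def extract_sample_set(image, labels, window=27):
    """Cut a window x window data cube around every labeled pixel.

    image:  (H, W, B) hyperspectral cube
    labels: (H, W) ground-truth map, 0 = unlabeled
    Returns: samples (N, window, window, B) and sample_labels (N,)
    """
    half = window // 2
    # Pad the borders so windows centered near the image edge stay full-sized.
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    samples, sample_labels = [], []
    for r, c in zip(*np.nonzero(labels)):
        cube = padded[r:r + window, c:c + window, :]
        samples.append(cube)
        sample_labels.append(labels[r, c])
    return np.stack(samples), np.array(sample_labels)
```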

[0047] Step 3, generate training samples and test samples.

[0048] In the hyperspectral image sample set, 5% of the samples are randomly selected to form the hyperspectral image training samples; the remaining 95% of the samples are used to form the hyperspectral image test samples.
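A sketch of the random 5% / 95% split in step 3, again with hypothetical names; only the split ratio comes from the text.

```python
import numpy as np

def split_samples(samples, sample_labels, train_ratio=0.05, seed=0):
    """Randomly select `train_ratio` of the samples for training; the rest become test samples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(round(train_ratio * len(samples)))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (samples[train_idx], sample_labels[train_idx],
            samples[test_idx], sample_labels[test_idx])
```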

[0049] Step 4, build a multi-class generative adversarial network.

[0050] Build a generator consisting of a fully connected layer and 4 d...
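The sentence above is cut off after "a fully connected layer and 4 d..."; a plausible reading is four deconvolution (transposed-convolution) layers, which is how GAN generators commonly upsample. The PyTorch sketch below is written under that assumption; whether the generator is conditioned on a class label is also not stated in the visible text, and all channel widths, kernel sizes, and default arguments are illustrative rather than taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Noise + class label -> synthetic 27x27 x `bands` data cube.

    Layer counts follow the truncated text (one fully connected layer,
    assumed four transposed-convolution layers); sizes are illustrative.
    """
    def __init__(self, noise_dim=100, num_classes=16, bands=200):
        super().__init__()
        self.num_classes = num_classes
        self.fc = nn.Linear(noise_dim + num_classes, 256 * 3 * 3)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=3, stride=2),             # 3x3   -> 7x7
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2),              # 7x7   -> 15x15
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=2),    # 15x15 -> 27x27
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, bands, kernel_size=3, stride=1, padding=1), # 27x27 -> 27x27
            nn.Tanh(),
        )

    def forward(self, noise, labels):
        onehot = F.one_hot(labels, self.num_classes).float()
        x = self.fc(torch.cat([noise, onehot], dim=1))
        x = x.view(-1, 256, 3, 3)
        return self.deconv(x)
```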



Abstract

The invention discloses a hyperspectral image classification method based on a multi-class generative adversarial network. The steps are as follows: (1) input a hyperspectral image; (2) obtain a sample set; (3) generate training samples and test samples; (4) build a multi-class generative adversarial network; (5) use the generator to generate samples; (6) use the discriminator to classify the training samples and the generated samples; (7) build the loss functions of the generator and the discriminator; (8) alternately train the generator and the discriminator; (9) classify the hyperspectral image. The present invention uses the built multi-class generative adversarial network to extract the spatial features of pixel neighborhoods while generating samples to increase the number of samples, which enhances the feature-extraction capability of the network, alleviates network over-fitting, and improves hyperspectral image classification accuracy.
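The abstract does not spell out the loss functions of step (7) or the alternating schedule of step (8). A common multi-class GAN formulation treats generated samples as an extra (K+1)-th discriminator class; the sketch below follows that convention purely as an illustration, reusing the hypothetical Generator from the earlier sketch, and should not be read as the patent's exact losses.

```python
import torch
import torch.nn.functional as F

def train_step(gen, disc, opt_g, opt_d, real_x, real_y, num_classes,
               noise_dim=100, device="cpu"):
    """One alternating update. Assumes disc outputs num_classes + 1 logits,
    where index num_classes is the 'generated' class (an assumption, not
    taken from the patent text)."""
    batch = real_x.size(0)
    fake_label = torch.full((batch,), num_classes, dtype=torch.long, device=device)

    # Discriminator step: real samples go to their class, generated samples to the fake class.
    noise = torch.randn(batch, noise_dim, device=device)
    gen_y = torch.randint(0, num_classes, (batch,), device=device)
    fake_x = gen(noise, gen_y).detach()
    d_loss = F.cross_entropy(disc(real_x), real_y) + F.cross_entropy(disc(fake_x), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push the discriminator to assign the requested real class to generated samples.
    noise = torch.randn(batch, noise_dim, device=device)
    gen_y = torch.randint(0, num_classes, (batch,), device=device)
    g_loss = F.cross_entropy(disc(gen(noise, gen_y)), gen_y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```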

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and further relates to a hyperspectral image classification method based on a multi-class generative adversarial network in the technical field of image classification. The invention can be used to classify ground objects in hyperspectral images.

Background Technique

[0002] The improvement of the spectral resolution of hyperspectral images provides richer information for classification, but also brings great challenges. Traditional methods currently used in hyperspectral image classification include support vector machines and decision trees; methods based on deep learning include stacked autoencoders and convolutional neural networks. Deep learning requires a large amount of labeled data as training samples, and it is difficult to collect enough labeled data for hyperspectral images. Therefore, in the classification of hyper...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/00; G06N3/04
CPC: G06V20/194; G06V20/13; G06N3/045
Inventors: 冯婕, 于海鹏, 焦李成, 张向荣, 王蓉芳, 尚荣华, 刘若辰, 刘红英
Owner: XIDIAN UNIV