Remote sensing image classification method based on deep fusion convolutional neural network

A convolutional neural network and remote sensing image technology, applied in the field of image classification. It addresses the low classification accuracy caused by single or redundant feature extraction from remote sensing images, with the effects of improving feature expression ability, avoiding over-fitting, and deepening the network.

Active Publication Date: 2020-09-01
CHENGDU UNIVERSITY OF TECHNOLOGY


Problems solved by technology

[0006] The purpose of the present invention is to address the above problems: to overcome the low classification accuracy caused by single or redundant feature extraction from remote sensing images in the prior art, and to obtain a high-level feature representation of the target by establishing a new network model, thereby improving the classification accuracy of the remote sensing image classification method based on a deep fusion convolutional neural network.



Examples


Embodiment 1

[0046] Embodiment 1: see Figures 1 and 2. A remote sensing image classification method based on a deep fusion convolutional neural network comprises the following steps:

[0047] (1) Construct a data set from the original remote sensing images and preprocess them; divide the preprocessed images into a training set, a test set and a validation set; add category labels to the images of the different categories in the training set; then perform data augmentation on the training set to obtain the final training data;

[0048] (2) Construct a deep fusion convolutional neural network;

[0049] The deep fusion convolutional neural network includes an encoder-decoder model, a VGG16 model, a fusion part, a flatten layer and a fully connected layer; the encoder-decoder model includes an encoding part and a decoding part;

[0050] The VGG16 model is used to extract the deep features of the image;

[0051] The encoding part includes a multi-layer convolutional la...
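As one plausible reading of the "fusion part" and "flatten layer" described above, the deep features from the VGG16 branch and the mid-level features from the encoder-decoder branch can be fused by channel-wise concatenation. The following numpy sketch is illustrative only; the feature-map names and shapes are assumptions, not taken from the patent:

```python
import numpy as np

# Hypothetical feature maps from the two branches (shapes are illustrative):
# the VGG16 branch yields deep features, the encoder-decoder branch yields
# mid-level features at the same spatial resolution.
deep_features = np.random.rand(7, 7, 512)   # e.g. a late VGG16 block output
mid_features = np.random.rand(7, 7, 256)    # e.g. the decoder output

# Fusion by channel-wise concatenation, then flattening for the
# fully connected layer.
fused = np.concatenate([deep_features, mid_features], axis=-1)
flat = fused.reshape(-1)

print(fused.shape)  # (7, 7, 768)
print(flat.shape)   # (37632,)
```

Concatenation keeps both feature sets intact and lets the fully connected layer learn how to weight them, which matches the stated goal of avoiding single or redundant feature extraction.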

Embodiment 2

[0060] Embodiment 2: see Figures 1 and 2. This embodiment further refines and defines Embodiment 1. Specifically:

[0061] The preprocessing in step (1) divides each pixel value of the original remote sensing image by 255 for normalization; the data augmentation performs horizontal mirroring, rotation and scaling operations on the images in the training set.
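The normalization and mirroring steps above can be sketched in a few lines of numpy; the input image here is illustrative, and rotation and scaling (which would typically use an image library) are omitted:

```python
import numpy as np

# Illustrative uint8 input image (2x2 pixels, 3 channels).
image = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Normalization: divide every pixel value by 255 so inputs lie in [0, 1].
normalized = image.astype(np.float32) / 255.0

# Horizontal mirroring (left-right flip), one of the named augmentations.
mirrored = np.fliplr(normalized)

assert normalized.max() <= 1.0
assert np.allclose(mirrored[:, ::-1, :], normalized)
```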

[0062] In the upsampling layer, the upsampling adopts the nearest neighbor method to increase the image size.
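Nearest-neighbor upsampling, as named above, simply repeats each pixel along both spatial axes. A minimal sketch for a single-channel array (the helper name is our own):

```python
import numpy as np

def nearest_neighbor_upsample(x, factor=2):
    """Upsample a (H, W) array by repeating each pixel `factor` times
    along both spatial axes (the nearest-neighbor method)."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

x = np.array([[1, 2],
              [3, 4]])
y = nearest_neighbor_upsample(x)
# y == [[1, 1, 2, 2],
#       [1, 1, 2, 2],
#       [3, 3, 4, 4],
#       [3, 3, 4, 4]]
```

Unlike learned transposed convolutions, this method introduces no parameters, which helps keep the decoder light and avoids over-fitting.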

[0063] In step (3), the softmax output and the cross-entropy loss function J(W,b) are:

[0064] p_i = e^{x_i} / Σ_{j=1}^{K} e^{x_j}

[0065] J(W,b) = −Σ_{i=1}^{K} y_i · log(p_i)

[0066] Among them, p_i is the normalized probability that the softmax function assigns to the i-th category at the fully connected layer, K is the number of categories, i and j index the categories, e is the base of the exponential function, y_i is the ground-truth label for the i-th category, x_i is the output value of the fully connected layer for the i-th category, x_j is the output value of the fully connected layer ...
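The softmax and cross-entropy formulas can be checked with a short numpy sketch; the logits are illustrative, and the max-subtraction is a standard numerical-stability trick not stated in the text:

```python
import numpy as np

def softmax(x):
    # e^{x_i} / sum_j e^{x_j}; subtracting the max keeps exp() from overflowing
    z = np.exp(x - np.max(x))
    return z / z.sum()

def cross_entropy(logits, true_class):
    # J = -log(p_true): the loss for a one-hot label reduces to the
    # negative log-probability of the true category.
    p = softmax(logits)
    return -np.log(p[true_class])

logits = np.array([2.0, 1.0, 0.1])  # illustrative fully connected outputs, K = 3
loss = cross_entropy(logits, 0)     # small when the true class gets high probability
```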

Embodiment 3

[0067] Embodiment 3: see Figures 1 and 2. This embodiment further refines and defines Embodiment 2.

[0068] The preprocessing in step (1) divides each pixel value of the original remote sensing image by 255 for normalization. This preprocessing provides a more efficient form for data storage and processing, while improving the convergence rate of model training.

[0069] The data augmentation performs horizontal mirroring, rotation and scaling operations on the images in the training set. The specific methods are: (1) horizontal mirroring, which flips the training images horizontally; (2) rotation, which lets the network learn rotation-invariant features during training; since the target may appear in different poses, rotation compensates for the scarcity of object poses in the training samples, and the rotation degree is set to 10; (3) scal...



Abstract

The invention discloses a remote sensing image classification method based on a deep fusion convolutional neural network, comprising the steps of: constructing a data set from the original remote sensing images, preprocessing them, dividing the preprocessed images into a training set, a test set and a validation set, and performing data augmentation on the training set; constructing a deep fusion convolutional neural network; training it to obtain an optimal network model; and classifying actually measured remote sensing images with the optimal network model. The invention provides a new classification method in which a newly constructed deep fusion convolutional neural network combines an improved encoder-decoder model with a VGG16 model. The model fuses the deep features and the middle-layer features of the remote sensing image, which effectively overcomes the low classification accuracy caused by single or redundant feature extraction in the prior art; by establishing the novel network model, a high-level feature representation of the target is obtained and the classification accuracy of remote sensing images is improved.

Description

Technical field

[0001] The invention relates to an image classification method, in particular to a remote sensing image classification method based on a deep fusion convolutional neural network.

Background technique

[0002] In recent years, with the rapid development of remote sensing imaging technology, the large volume of remote sensing images enables us to explore the earth's surface in more detail. Remote sensing image scene classification classifies the sub-regions extracted from remote sensing images of multiple ground objects, which provides guidance for basic work such as urban planning and land resource management.

[0003] Similar to the traditional image classification process, remote sensing image classification consists of image preprocessing, feature extraction, and classifier classification. The most critical step is the extraction of target features. The traditional pixel-based feature...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/084; G06N 3/045; G06F 18/2414; G06F 18/253; G06F 18/214
Inventors: 郭勇, 张晓霞, 张霞
Owner: CHENGDU UNIVERSITY OF TECHNOLOGY