
Remote Sensing Image Classification Method Based on Deep Fusion Convolutional Neural Network

A convolutional neural network and remote sensing image technology, applied in the field of image classification, that addresses the problems of low classification accuracy and single or redundant remote sensing image feature extraction, with the effect of improving feature expression ability, avoiding over-fitting, and ensuring robustness.

Active Publication Date: 2022-03-08
CHENGDU UNIVERSITY OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to provide a remote sensing image classification method based on a deep fusion convolutional neural network that solves the above problems: it overcomes the defect of low classification accuracy caused by single or redundant feature extraction from remote sensing images in the prior art, and obtains a high-level feature expression of the target by establishing a new network model, thereby improving the classification accuracy of remote sensing images.

Method used



Examples


Embodiment 1

[0046] Embodiment 1: see Figures 1 to 2. A remote sensing image classification method based on a deep fusion convolutional neural network comprises the following steps:

[0047] (1) Construct a data set from the original remote sensing images, preprocess the original remote sensing images, divide the preprocessed images into a training set, a test set and a verification set, add category labels to the images of different categories in the training set, and then perform data augmentation on the training set to obtain the final training data;

[0048] (2) Construct a deep fusion convolutional neural network;

[0049] The deep fusion convolutional neural network includes an encoder-decoder model, a VGG16 model, a fusion part, a flatten layer and a fully connected layer; the encoder-decoder model includes an encoding part and a decoding part;

[0050] The VGG16 model is used to extract the deep features of the image;

[0051] The encoding part includes a multi-layer convolutional la...
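The excerpt above only names the components of the network. As a rough illustration, the following PyTorch sketch shows one way the described parts (a VGG16 branch for deep features, an encoder-decoder branch for mid-level features, a fusion part, a flatten layer and fully connected layers) could be wired together; layer counts, channel widths, the 7×7 pooled size and concatenation-based fusion are assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class DeepFusionCNN(nn.Module):
    """Sketch of a deep fusion network: VGG16 branch (deep features) plus an
    encoder-decoder branch (mid-level features), fused and classified."""

    def __init__(self, num_classes):
        super().__init__()
        # VGG16 convolutional backbone extracts deep features (512 channels).
        self.vgg_features = vgg16(weights=None).features
        # Encoder: stacked convolution + pooling layers (depths are assumptions).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # Decoder: nearest-neighbour upsampling + convolution, mirroring the encoder.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # Fusion by channel concatenation, then flatten + fully connected layers.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear((512 + 64) * 7 * 7, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        deep = self.pool(self.vgg_features(x))          # deep features from VGG16
        mid = self.pool(self.decoder(self.encoder(x)))  # mid-level encoder-decoder features
        fused = torch.cat([deep, mid], dim=1)           # fusion part
        return self.classifier(fused)                   # logits for softmax / cross-entropy
```

A model built as `DeepFusionCNN(num_classes=K)` outputs K-way logits that feed the softmax and cross-entropy loss described in Embodiment 2.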

Embodiment 2

[0060] Embodiment 2: see Figures 1 to 2. This embodiment further refines and defines Embodiment 1. Specifically:

[0061] The preprocessing in step (1) divides each pixel value of the original remote sensing image by 255 for normalization, and the data augmentation performs horizontal mirroring, rotation and scaling operations on the images in the training set.
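A minimal sketch of the step (1) normalization, for illustration only (the array name and image size are hypothetical):

```python
import numpy as np

# Step (1) normalization: divide every pixel value of the original remote
# sensing image by 255 so that the network inputs lie in [0, 1].
image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)  # hypothetical H x W x 3 image
normalized = image.astype(np.float32) / 255.0
```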

[0062] In the upsampling layer, the upsampling adopts the nearest neighbor method to increase the image size.
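A small illustration of nearest-neighbour upsampling, here via PyTorch's F.interpolate (the feature-map size is an assumption): each value is simply copied into the enlarged grid, so the spatial size grows without creating new values.

```python
import torch
import torch.nn.functional as F

feat = torch.rand(1, 64, 32, 32)                           # hypothetical feature map
up = F.interpolate(feat, scale_factor=2, mode="nearest")   # nearest-neighbour upsampling
print(up.shape)                                            # torch.Size([1, 64, 64, 64])
```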

[0063] In step (3), the cross-entropy loss function J(W, b) is defined from the softmax output of the fully connected layer:

[0064] p_i = e^{x_i} / Σ_{j=1}^{K} e^{x_j}

[0065] J(W, b) = −Σ_{i=1}^{K} y_i · log(p_i)

[0066] Among them, p_i is the normalized probability output of the softmax function for the i-th category at the fully connected layer, K is the number of categories, i and j index the categories, e is the base of the exponential function, x_i is the output value of the fully connected layer for the i-th category, x_j is the output value of the fully connected layer for the j-th category, and y_i is the ground-truth label indicator for the i-th category.
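A minimal sketch of the softmax and cross-entropy computation defined above (batch size, category count and tensor names are assumptions; PyTorch's built-in cross_entropy fuses the two formulas):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)                 # fully connected outputs x_1..x_K for 8 samples, K = 10
labels = torch.randint(0, 10, (8,))         # ground-truth category indices

p = F.softmax(logits, dim=1)                # p_i = e^{x_i} / sum_j e^{x_j}
manual = -torch.log(p[torch.arange(8), labels]).mean()  # cross-entropy averaged over the batch
builtin = F.cross_entropy(logits, labels)               # equivalent, numerically stabler
assert torch.allclose(manual, builtin)
```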

Embodiment 3

[0067] Embodiment 3: see Figures 1 to 2. This embodiment further refines and defines Embodiment 2.

[0068] The preprocessing in step (1) divides each pixel value of the original remote sensing image by 255 for normalization. This preprocessing provides a more efficient data representation and improves the convergence rate of model training.

[0069] The data augmentation performs horizontal mirroring, rotation and scaling operations on the images in the training set. The specific methods are: (1) horizontal mirroring, which flips the training images horizontally; (2) rotation, which lets the network learn rotation-invariant features during training; since the target may appear in different poses, rotation compensates for the limited range of object poses in the training samples, and the rotation degree is set to 10; (3) scal...
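One way to realize this augmentation pipeline is sketched below with torchvision transforms; the horizontal mirroring and the 10-degree rotation follow the text, while the scaling range is an assumption.

```python
from torchvision import transforms

# Training-set augmentation: horizontal mirroring, rotation (degree set to 10)
# and random scaling; applied only to the training images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                 # (1) horizontal mirroring
    transforms.RandomRotation(degrees=10),                  # (2) rotation within +/- 10 degrees
    transforms.RandomAffine(degrees=0, scale=(0.9, 1.1)),   # (3) scaling (range assumed)
    transforms.ToTensor(),
])
```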



Abstract

The invention discloses a remote sensing image classification method based on a deeply fused convolutional neural network, which includes: constructing a data set from original remote sensing images, preprocessing the original remote sensing images, dividing the preprocessed images into a training set, a test set and a verification set, and augmenting the training set; building a deep fusion convolutional neural network; training to obtain the optimal network model; and using the optimal network model to classify the remote sensing images to be measured. The present invention provides a new classification method that constructs a new deep fusion convolutional neural network, combining an improved encoder-decoder model with the VGG16 model so that the deep features and middle features of remote sensing images are fused. This effectively overcomes the defect in the prior art that single or redundant remote sensing image feature extraction leads to low classification accuracy: by establishing a new network model, the invention obtains a high-level feature expression of the target and thereby improves the classification accuracy of remote sensing images.

Description

Technical field

[0001] The invention relates to an image classification method, in particular to a remote sensing image classification method based on a deep fusion convolutional neural network.

Background technique

[0002] In recent years, with the rapid development of remote sensing imaging technology, the large number of available remote sensing images allows us to explore the earth's surface in greater detail. Remote sensing image scene classification assigns sub-regions extracted from remote sensing images of multiple ground objects to categories, and provides guidance for basic work such as urban planning and land resource management.

[0003] Similar to the traditional image classification process, remote sensing image classification includes image preprocessing, feature extraction, and classifier-based classification. The most critical step in remote sensing image classification technology is the extraction of target features. The traditional pixel-based feature...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V10/764, G06V10/80, G06V10/82, G06V10/20, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06N3/045, G06F18/2414, G06F18/253, G06F18/214
Inventors: 郭勇, 张晓霞, 张霞
Owner: CHENGDU UNIVERSITY OF TECHNOLOGY