Remote sensing scene classification method based on branch feature fusion convolutional network

A convolutional network and feature fusion technology, applied in scene recognition, biological neural network models, instruments, etc. It addresses the problems that deep CNN models with good classification performance have high complexity, that low-complexity shallow CNN models offer insufficient classification accuracy, and that remote sensing scenes have complex spatial structures, with the effect of improved classification accuracy and strong competitiveness.

Active Publication Date: 2020-09-29
QIQIHAR UNIVERSITY
Cites: 4 | Cited by: 6

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to solve the following problems: existing remote sensing scene images have complex spatial structures, which tend to produce large intra-class differences and inter-class similarities; most existing deep CNN models with good classification performance have relatively high complexity; and shallow CNN models have low complexity but classification accuracy that cannot meet practical application requirements in the remote sensing field. A remote sensing scene classification method based on a branch feature fusion convolutional network is therefore proposed.



Examples


Specific Embodiment 1

[0016] Specific Embodiment 1: The remote sensing scene classification method based on a branch feature fusion convolutional network in this embodiment proceeds as follows:

[0017] In recent years, researchers have made many effective attempts at remote sensing scene classification and proposed a large number of different classification methods. These works fall into three main categories: handcrafted-feature-based methods, unsupervised-feature-learning-based methods, and the currently mainstream deep-CNN-feature-learning-based methods [22]. The relevant work of these three categories is briefly introduced below, followed by the main contributions of the present invention.

[0018] A. Handcrafted-Feature-Based Methods

[...

Specific Embodiment 2

[0033] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in the LCNN-BFF network model established in Step 1; specifically:

[0034] The LCNN-BFF model consists of an input layer, a batch normalization layer, a ReLU activation layer, and Group 1 through Group 9;

[0035] Group 1 includes the first regular convolution layer, the first depthwise separable convolution layer, a batch normalization layer, a ReLU activation layer, and the first max pooling layer;

[0036] The convolution kernel size of both the first regular convolution layer and the first depthwise separable convolution layer is 3×3, with 32 output channels and a convolution stride of 1; the pooling size of the first max pooling layer is 2×2, with a pooling stride of 2;

[0037] The output data of the input layer is input to the first regular convolutional layer, the output data of the first reg...
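To make the Group 1 structure above concrete, here is a minimal PyTorch sketch, not the patent's own code. The 'same' padding, the standard depthwise-plus-pointwise realization of the depthwise separable convolution, and the exact layer ordering after the truncated paragraph [0037] are assumptions.

```python
import torch.nn as nn

class Group1(nn.Module):
    """Sketch of Group 1 per paragraphs [0035]-[0036]: a 3x3 regular
    convolution and a 3x3 depthwise separable convolution (32 output
    channels, stride 1 each), batch normalization, ReLU, and 2x2 max
    pooling with stride 2. Padding and exact ordering are assumptions."""

    def __init__(self, in_channels=3):
        super().__init__()
        # First regular convolution: 3x3 kernel, 32 channels, stride 1
        # (padding=1 assumed, to preserve spatial size).
        self.conv = nn.Conv2d(in_channels, 32, kernel_size=3, stride=1, padding=1)
        # Depthwise separable convolution, realized in the usual way as a
        # per-channel (grouped) 3x3 convolution followed by a 1x1 pointwise one.
        self.depthwise = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1, groups=32)
        self.pointwise = nn.Conv2d(32, 32, kernel_size=1)
        self.bn = nn.BatchNorm2d(32)
        self.relu = nn.ReLU(inplace=True)
        # First max pooling layer: 2x2 window, stride 2.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        x = self.conv(x)                       # first regular convolution
        x = self.pointwise(self.depthwise(x))  # first depthwise separable convolution
        x = self.relu(self.bn(x))              # batch normalization + ReLU
        return self.pool(x)                    # first max pooling
```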

Specific Embodiment 3

[0086] Specific Embodiment 3: This embodiment differs from Specific Embodiment 1 or 2 in that the input layer receives remote sensing scene image data of size 256×256×3.

[0087] Other steps and parameters are the same as those in Specific Embodiment 1 or 2.
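As a quick sanity check, the Group1 sketch from Specific Embodiment 2 can be fed an input of this size; the batch dimension and random data below are illustrative only.

```python
import torch

# One 256x256x3 remote sensing scene image (PyTorch uses N x C x H x W).
x = torch.randn(1, 3, 256, 256)
y = Group1(in_channels=3)(x)
print(y.shape)  # torch.Size([1, 32, 128, 128]): 2x2 pooling halves each side
```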



Abstract

The invention discloses a remote sensing scene classification method based on a branch feature fusion convolutional network, and relates to remote sensing scene classification. The objective of the invention is to solve the problems that existing remote sensing scene images have complex spatial structures, that most existing deep CNN models with good classification performance have high complexity, and that shallow CNN models have low complexity but classification accuracy that cannot meet the requirements of practical application in the remote sensing field. The method comprises the following steps: 1, establishing an LCNN-BFF network model; 2, training the network model with the data set; 3, verifying the accuracy of the pre-trained model with the test set, obtaining the trained model if the accuracy meets the requirement, and otherwise continuing to train the model until the accuracy meets the requirement; and 4, using the trained model to classify the remote sensing scenes to be identified. The method is applied to the field of remote sensing scene classification.
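Steps 2 and 3 of the abstract describe an iterative train-then-verify procedure. The sketch below illustrates that loop in PyTorch; the Adam optimizer, cross-entropy loss, accuracy threshold, and the train_loader/test_loader objects are illustrative assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

def train_until_accurate(model, train_loader, test_loader,
                         required_acc=0.95, max_rounds=100, device="cpu"):
    """Train the model (step 2) and verify test-set accuracy (step 3),
    repeating until the accuracy requirement is met (threshold assumed)."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_rounds):
        model.train()
        for images, labels in train_loader:        # step 2: train on the data set
            optimizer.zero_grad()
            loss = loss_fn(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in test_loader:     # step 3: verify on the test set
                preds = model(images.to(device)).argmax(dim=1)
                correct += (preds == labels.to(device)).sum().item()
                total += labels.size(0)
        if total and correct / total >= required_acc:
            break                                  # accuracy meets the requirement
    return model                                   # trained model, used in step 4
```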

Description

Technical Field

[0001] The invention relates to a remote sensing scene classification method.

Background Art

[0002] Related research work in the field of remote sensing has attracted more and more researchers' attention; among its tasks, remote sensing scene classification assigns specified labels to random scenes based on their image content [1-3] ([1] Lu X, Zheng X, Yuan Y. Remote sensing scene classification by unsupervised representation learning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(9): 5148-5157. [2] Li E, Xia J, Du P, et al. Integrating multilayer features of convolutional neural networks for remote sensing scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(10): 5653-5665. [3] Gong Z, Zhong P, Yu Y, et al. Diversity-promoting deep structural metric learning for remote sensing scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 56(1): 371-390.). Its research results ar...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62; G06N3/04
CPC: G06V20/13; G06N3/045; G06F18/214; G06F18/241; G06F18/253
Inventors: 石翠萍 (Shi Cuiping), 王涛 (Wang Tao), 刘超 (Liu Chao), 苗凤娟 (Miao Fengjuan)
Owner: QIQIHAR UNIVERSITY