Scene Recognition Method Based on Deconvolutional Deep Network Learning with Weights

A deep network and scene recognition technology, applied in the field of scene recognition based on deconvolutional deep network learning, which addresses the problems of low classification accuracy and of scene categories, such as runways and roads, that are difficult to distinguish.

Active Publication Date: 2020-11-17
XI'AN INST OF OPTICS & FINE MECHANICS - CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0008] Although the above four methods have achieved good results, they all ignore two characteristics of remote sensing images: the ground objects are complex, and different scene categories can be highly similar to one another, which results in low classification accuracy.
For example, an airport scene contains not only airplanes but also runways and terminal buildings; the runway is difficult to distinguish from a road scene, and the terminal building is easily confused with a residential area. Another example is city scenes and dense prefabricated-housing areas, which belong to different scene categories yet look highly similar even to human vision, thereby hindering further improvement of classification accuracy. In addition, most of these existing methods use hand-designed features, such as scale-invariant feature transform (SIFT) descriptors and color histograms, which are less universal than features learned from the data itself.




Embodiment Construction

[0062] Referring to figure 1, the present invention provides a scene recognition method based on weighted deconvolutional deep network learning, which comprises the following steps:

[0063] 1) Construct a weighted deconvolutional deep network model, use it to learn from the original input images, and obtain feature maps of each image at different scales;

[0064] Build a deconvolutional deep network model with weights:

[0065] C(l) = \frac{\lambda_l}{2} \left\| \hat{y}_l - y \right\|_2^2 + \sum_{k=1}^{K_l} \left| z_{k,l} \right|_1

[0066] where C(l) is the objective function of the weighted deconvolutional deep network model, l is the number of layers of the weighted deconvolutional deep network structure, λ_l is the regularization parameter, y is the original input image, \hat{y}_l is the image obtained by downward reconstruction from the feature maps of layer l, z_{k,l} is the k-th feature map of layer l, K_l is the total number of feature maps in layer l, and |·|_1 denotes the sparsity constraint on the feature maps;
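As a rough illustration of how an objective of this form could be evaluated for a single layer, the NumPy sketch below sums a reconstruction term and an L1 sparsity term; the per-feature-map filters, the 2-D convolution used for the downward reconstruction, and the function name layer_objective are assumptions made for this example, not details taken from the patent.

import numpy as np
from scipy.signal import convolve2d

def layer_objective(y, feature_maps, filters, lam):
    # Evaluate C(l) for one layer: reconstruction error plus L1 sparsity.
    # y            : 2-D original input image (grayscale for simplicity)
    # feature_maps : list of the K_l feature maps z_{k,l}
    # filters      : list of K_l 2-D filters used for the downward reconstruction (assumed)
    # lam          : regularization parameter lambda_l balancing the two terms

    # Downward reconstruction y_hat: sum of feature maps convolved with their filters.
    y_hat = np.zeros_like(y, dtype=float)
    for z_k, f_k in zip(feature_maps, filters):
        y_hat += convolve2d(z_k, f_k, mode="same")

    # (lambda_l / 2) * ||y_hat - y||_2^2
    reconstruction = 0.5 * lam * np.sum((y_hat - y) ** 2)

    # sum_k |z_{k,l}|_1
    sparsity = sum(np.abs(z_k).sum() for z_k in feature_maps)

    return reconstruction + sparsity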

[0067] As shown in figure 2, for the ...



Abstract

The invention relates to a scene recognition method based on weighted deconvolutional deep network learning. The method comprises: (1) constructing a weighted deconvolutional deep network model and using it to learn from the original input images, thereby obtaining feature maps of each image at different scales; (2) sampling the feature maps obtained in step (1) with a spatial pyramid model, thereby forming a feature vector expression of each image; and (3) dividing the original input images into a training set and a testing set and feeding the feature vector expressions of the images into a support vector machine classifier for classification training and testing, so that recognition results for different scenes are obtained. According to the method, representations of the scenes are established at different scales, making the scene expression accurate and sufficient, so that scene classification accuracy is improved.
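As a rough sketch of steps (2) and (3) above, the example below pools each image's feature maps over a spatial pyramid and trains a support vector machine on the resulting vectors; the pyramid levels (1x1, 2x2, 4x4), the use of max pooling, and the scikit-learn LinearSVC classifier are assumptions chosen for illustration, not details fixed by the patent.

import numpy as np
from sklearn.svm import LinearSVC

def spatial_pyramid_vector(feature_map, levels=(1, 2, 4)):
    # Max-pool one 2-D feature map over 1x1, 2x2 and 4x4 grids and
    # concatenate the pooled responses into a single vector.
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                cell = feature_map[i * h // n:(i + 1) * h // n,
                                   j * w // n:(j + 1) * w // n]
                pooled.append(cell.max() if cell.size else 0.0)
    return np.array(pooled)

def image_descriptor(feature_maps):
    # Feature vector expression of one image: concatenation of the
    # pyramid-pooled vectors of all of its feature maps.
    return np.concatenate([spatial_pyramid_vector(fm) for fm in feature_maps])

def train_and_test(train_maps, train_labels, test_maps):
    # Step (3): train an SVM classifier on the training-set descriptors
    # and predict scene labels for the test set.
    X_train = np.stack([image_descriptor(fms) for fms in train_maps])
    X_test = np.stack([image_descriptor(fms) for fms in test_maps])
    clf = LinearSVC()
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)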

Description

Technical field

[0001] The invention belongs to the technical field of information processing and relates to a scene recognition and understanding method for remote sensing images, in particular to a scene recognition method based on weighted deconvolutional deep network learning.

Background technique

[0002] With the development of China's aerospace technology, more and more high-resolution satellites are launched into space to acquire data on the earth's surface for purposes such as disaster monitoring, agricultural yield estimation, and military reconnaissance. The data transmitted from the satellites to the ground usually covers a very large frame, and scene classification is a very important preprocessing step for making full use of this large volume of large-scale, high-resolution remote sensing data.

[0003] At present, the methods for scene classification of remote sensing images are mainly divided into four categories:

[0004] One is ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/00; G06K9/62; G06N3/02
Inventor: 袁媛 (Yuan Yuan), 卢孝强 (Lu Xiaoqiang), 付敏 (Fu Min)
Owner: XI'AN INST OF OPTICS & FINE MECHANICS - CHINESE ACAD OF SCI