
Method for automatically annotating remote sensing images on basis of deep learning

An automatic annotation technology for remote sensing images, applied in special data processing applications, instruments, electric digital data processing, etc.

Active Publication Date: 2014-05-28
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

Most traditional image annotation work analyzes and understands the visual content of an image through its low-level visual features, but most such methods suffer from the "semantic gap" problem.
The "semantic gap" means that the high-level semantics of an image cannot be inferred from its low-level visual features alone: there is no suitable abstraction bridging the low-level visual features and the high-level semantics, so the annotation results are not ideal.
[0006] To overcome the "semantic gap" problem, methods have gradually been developed that map the low-level visual features of an image to its high-level semantics. Typical methods include the probabilistic Latent Semantic Analysis (pLSA) model, the Latent Dirichlet Allocation (LDA) model, and the Author Topic Model (ATM). However, most of these methods consider only the color and texture characteristics of the image and ignore the spectral characteristics of remote sensing images.
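To illustrate how such topic models bridge low-level features and semantics, the sketch below fits a toy pLSA model by EM on a bag-of-visual-words count matrix. This is a minimal illustration, not the patent's method: the function name, matrix shapes, and iteration count are assumptions, and a real system would use a large visual vocabulary and many images.

```python
import numpy as np

def plsa(counts, n_topics, n_iters=50, seed=0):
    """Fit pLSA by EM on a (n_images, n_visual_words) count matrix.

    Returns P(topic | image) and P(visual_word | topic); the latent
    topics play the role of the 'abstraction bridge' between visual
    words and high-level semantics.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random positive initialisation, normalised into distributions
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        # E-step: posterior P(z | d, w) ∝ P(z | d) * P(w | z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]      # (d, z, w)
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate from expected counts
        weighted = counts[:, None, :] * post               # (d, z, w)
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

On a count matrix where two groups of images use disjoint visual words, the two topics separate the groups; annotation then amounts to attaching labels to topics.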

Method used


Examples


Embodiment Construction

[0053] The present invention is described in further detail below with reference to specific examples.

[0054] An automatic labeling method for remote sensing images based on deep learning, including:

[0055] (1) Extract the low-level feature vectors of the remote sensing image to be annotated and construct the visual feature vector of the corresponding remote sensing image;

[0056] In this implementation, the low-level feature vectors include the average spectral reflectance feature vector, the color layout descriptor, the color structure descriptor, the scalable color descriptor, the homogeneous texture descriptor, the edge histogram descriptor, the GIST feature vector, and a SIFT-based visual bag-of-words vector.
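As a simplified sketch of how such descriptors can be combined into one visual feature vector, the code below concatenates a per-band average spectral reflectance with a crude edge-orientation histogram. Both the function name and the histogram stand-in are assumptions for illustration; the patent's actual descriptors (MPEG-7 color/texture descriptors, GIST, SIFT bag-of-words) require dedicated implementations.

```python
import numpy as np

def visual_feature_vector(image):
    """Build a simple visual feature vector for a remote sensing image.

    image: (H, W, B) array with one channel per spectral band.
    Concatenates the per-band average spectral reflectance with an
    8-bin edge-orientation histogram computed on the first band
    (a stand-in for the edge histogram descriptor).
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    spectral = pixels.mean(axis=0)                    # average reflectance per band

    # Gradient orientations on the first band, binned over (-pi, pi]
    gy, gx = np.gradient(image[..., 0].astype(float))
    angles = np.arctan2(gy, gx)
    edge_hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    edge_hist = edge_hist / max(edge_hist.sum(), 1)   # normalise to a distribution

    return np.concatenate([spectral, edge_hist])
```

Concatenation keeps each descriptor's information intact; in practice each part is also scaled so that no single descriptor dominates the DBM's visible layer.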

[0057] The average spectral reflectance feature vector can be obtained directly from the remote sensing image data. This differs from ordinary images: the spectral information is already collected when the satellite captures ...



Abstract

The invention discloses a method for automatically annotating remote sensing images on the basis of deep learning. The method comprises extracting visual feature vectors of the remote sensing images to be annotated, then inputting these vectors into a deep Boltzmann machine (DBM) model to annotate the images automatically. The DBM model comprises, from bottom to top, a visible layer, a first hidden layer, a second hidden layer, and a tag layer, and is obtained by training. Because the model contains two hidden layers, it can effectively alleviate the "semantic gap" problem in image semantic annotation and improve the overall annotation accuracy.
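The four-layer architecture described above (visible → hidden 1 → hidden 2 → tag) can be sketched as a single bottom-up mean-field pass that turns a visual feature vector into tag probabilities. This is a hedged illustration under assumed weight shapes and names; full DBM inference iterates mean-field updates over both hidden layers rather than making one pass, and the weights here are stand-ins for the trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbm_annotate(v, W1, b1, W2, b2, W3, b3, threshold=0.5):
    """One bottom-up pass through a 4-layer DBM-style network.

    v  : visual feature vector (visible layer)
    W1 : visible -> first hidden weights,  b1 its biases
    W2 : hidden1 -> second hidden weights, b2 its biases
    W3 : hidden2 -> tag layer weights,     b3 its biases
    Returns the boolean tag assignment and the tag probabilities.
    """
    h1 = sigmoid(v @ W1 + b1)      # first hidden layer activations
    h2 = sigmoid(h1 @ W2 + b2)     # second hidden layer activations
    tags = sigmoid(h2 @ W3 + b3)   # per-tag probabilities at the tag layer
    return tags >= threshold, tags
```

The two hidden layers provide the intermediate abstractions between low-level features and semantic tags, which is what lets the model mitigate the "semantic gap".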

Description

technical field

[0001] The invention relates to intelligent classification and retrieval technology for remote sensing images, and in particular to an automatic tagging method for remote sensing images based on deep learning.

Background technique

[0002] Remote sensing images are an important source of spatial information and are widely used in geological and flood disaster monitoring, agricultural and forest resource surveys, land use and urban planning, and military applications. With the development of China's space science and earth observation technology, the volume of remote sensing image data grows exponentially every year, and effective management of massive remote sensing image data has become increasingly important.

[0003] Remote sensing image annotation is one of the important parts of remote sensing image analysis and understanding. It extracts the low-level visual features of remote sensing images and learns the connection between ...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06F17/30
CPC: G06F16/5866
Inventor 陈华钧黄梅龙江琳陶金火杨建华郑国轴吴朝晖
Owner ZHEJIANG UNIV