
Multi-modal remote sensing image matching method based on convolutional neural network feature map

A method in the field of convolutional neural networks and remote sensing image technology, applied to multimodal remote sensing image matching based on convolutional neural network feature maps.

Active Publication Date: 2020-07-14
SOUTHWEST JIAOTONG UNIV
Cites: 12 · Cited by: 16

AI Technical Summary

Problems solved by technology

[0010] In order to solve the above-mentioned problems in the prior art, the purpose of the present invention is to provide a multimodal remote sensing image matching method based on convolutional neural network feature maps that obtains reliable feature matching results for multi-modal remote sensing images with significant nonlinear grayscale changes and geometric deformations, without any image prior information or manual intervention.




Embodiment Construction

[0066] Embodiments of the present invention are described in detail below, examples of which are shown in the drawings, wherein the same or similar reference numerals designate the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the figures are exemplary and are intended to explain the present invention and should not be construed as limiting the present invention.

[0067] In the description of the present invention, it should be understood that the terms "first" and "second" are used for description purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features. In the description of the present invention, "plurality" means two or more, unless otherwise specifically defined.

[0068]...


PUM

No PUM

Abstract

The invention relates to a multi-modal remote sensing image matching method based on a convolutional neural network feature map. The method comprises the following steps: 1) constructing a similarity-measurement neural network FSNet that is robust to nonlinear grayscale changes of the images; 2) constructing a training sample set to train FSNet, and forming a feature extraction network CSNet from the convolution modules of FSNet; 3) down-sampling the reference image and the search image of the multi-modal remote sensing image pair to be matched, and extracting the depth feature maps of the down-sampled images with CSNet; 4) estimating a homography transformation model H between the two depth feature maps, and geometrically correcting the search image with H to obtain a corrected search image; 5) performing feature matching between the reference image and the corrected search image with FSNet; 6) inversely mapping the coordinates of the matching points with the inverse transformation H⁻¹ to obtain the final matching result. The invention provides a multi-modal remote sensing image matching method based on the convolutional neural network feature map that is robust to the nonlinear grayscale changes and geometric deformations of multi-modal remote sensing images and requires no prior information.
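The geometric part of the pipeline above (steps 3, 4, and 6) can be sketched as follows. This is a minimal illustration, not the patent's implementation: `downsample` is a naive strided stand-in for step 3, the homography `H` is simply assumed rather than estimated from CSNet feature maps, and the point coordinates are made-up examples. It shows only the coordinate bookkeeping: matches found on the H-corrected search image are mapped back to the original search image with H⁻¹.

```python
import numpy as np

def downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Step 3 (stand-in): naive down-sampling by striding."""
    return img[::factor, ::factor]

def apply_homography(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map Nx2 pixel coordinates through a 3x3 homography H."""
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian

# Step 4: assume this H was estimated between the depth feature maps
# (hypothetical values; in the patent H comes from CSNet feature-map matching).
H = np.array([[1.02,  0.01,  5.0],
              [-0.01, 0.98, -3.0],
              [1e-5,  2e-5,  1.0]])

# Step 5 yields match coordinates on the corrected search image (made-up points):
matches_corrected = np.array([[10.0, 20.0], [100.0, 50.0], [37.5, 81.25]])

# Step 6: map the matches back to the original search image with H^-1.
matches_original = apply_homography(np.linalg.inv(H), matches_corrected)

# Sanity check: re-applying H recovers the corrected-image coordinates.
roundtrip = apply_homography(H, matches_original)
assert np.allclose(roundtrip, matches_corrected)
```

The round trip confirms that matching on the corrected image and inverse-mapping the point coordinates is equivalent to matching directly against the original search image geometry, which is the point of steps 4–6.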

Description

Technical Field

[0001] The invention belongs to the technical field of image matching in remote sensing image processing, and in particular relates to a multimodal remote sensing image matching method based on convolutional neural network feature maps.

Background Technique

[0002] With the rapid development of remote sensing technology, sensor types and data acquisition methods are becoming more and more diverse, and it is easy to obtain multi-modal remote sensing images of the same observation area. Since multimodal remote sensing images can reflect different characteristics of the same ground object, fusion processing of multimodal remote sensing images is helpful for image interpretation and better acquisition of ground object information. However, there may be non-linear grayscale changes caused by different sensor imaging principles, image background changes caused by differences in shooting time (such as new and demolished artificial targets, seasonal changes in veget...

Claims


Application Information

Patent Timeline
Patent Type & Authority: Application (China)
IPC (8): G06K9/62; G06K9/00; G06N3/04; G06N3/08; G06T7/33; G06T7/35
CPC: G06N3/08; G06T7/337; G06T7/35; G06T2207/10032; G06T2207/20081; G06T2207/20084; G06V20/13; G06N3/048; G06N3/045; G06F18/22; G06F18/214
Inventors: 陈敏, 赵怡涛, 严少华, 朱庆
Owner: SOUTHWEST JIAOTONG UNIV