
Matching method of optical image and radar image based on multi-channel convolutional neural network

A convolutional neural network and optical image technology, applied to biological neural network models, neural architectures, instruments, and the like; it solves the problem that the two image types cannot be matched accurately, and achieves the effects of full utilization of the image information, a good feature space, and stable matching results.

Active Publication Date: 2020-07-28
TONGJI UNIV

AI Technical Summary

Problems solved by technology

There will be a certain displacement between the two types of image features, resulting in the inability to match them accurately.



Examples


Embodiment

[0050] As shown in Figure 1, the matching of two 1000×1000 images is taken as an example below, and the steps are described in detail:

[0051] In the "preprocessing" step, both the optical image and the radar image are compressed to 256×256; then steps 401 and 401-1 are performed;
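The patent does not name an implementation framework; as a minimal sketch of this preprocessing step, the snippet below assumes PyTorch and bilinear interpolation (both assumptions) to compress the 1000×1000 inputs to 256×256.

import torch
import torch.nn.functional as F

# Hypothetical preprocessing sketch; PyTorch and bilinear resampling are
# assumptions, as the patent does not specify a framework or resampling method.
def preprocess(optical: torch.Tensor, sar: torch.Tensor):
    """Compress both images to 256x256 before feature extraction.

    optical: (B, C_opt, 1000, 1000) multispectral optical image
    sar:     (B, 1, 1000, 1000) SAR image
    """
    optical_256 = F.interpolate(optical, size=(256, 256), mode="bilinear", align_corners=False)
    sar_256 = F.interpolate(sar, size=(256, 256), mode="bilinear", align_corners=False)
    return optical_256, sar_256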

[0052] In step 401, 96 7×7 convolutions are performed on the optical image and the convolution results are fed to ReLU neurons; in step 401-1, 12 7×7 convolutions are performed on the SAR image and the convolution results are fed to ReLU neurons; then steps 402 and 402-1 are performed;
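As a rough illustration of steps 401 and 401-1, the first convolution layer of each branch could be written as follows; PyTorch, the input channel counts (4-band optical, 1-band SAR), and the padding are assumptions not stated in the embodiment.

import torch.nn as nn

# Hypothetical first layers of the two branches; input channel counts and
# padding of 3 are assumptions.
optical_conv1 = nn.Sequential(
    nn.Conv2d(in_channels=4, out_channels=96, kernel_size=7, padding=3),  # step 401: 96 7x7 convolutions
    nn.ReLU(inplace=True),                                                # feed results to ReLU neurons
)
sar_conv1 = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=12, kernel_size=7, padding=3),  # step 401-1: 12 7x7 convolutions
    nn.ReLU(inplace=True),
)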

[0053] In step 402, the neuron outputs obtained in step 401 are subjected to 2×2 mean downsampling; in step 402-1, the neuron outputs obtained in step 401-1 are subjected to 2×2 mean downsampling; then steps 403 and 403-1 are performed;
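The 2×2 mean downsampling of steps 402 and 402-1 corresponds to average pooling; a minimal sketch follows, where the stride of 2 is an assumption chosen so that the spatial size is halved.

import torch.nn as nn

# Steps 402 / 402-1: 2x2 mean (average) downsampling of the ReLU outputs.
mean_pool = nn.AvgPool2d(kernel_size=2, stride=2)
# e.g. a 256x256 feature map becomes 128x128 after this step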

[0054] In step 403, 128 5×5 convolutions are performed on the optical image and the convolution results are fed to ReLU neurons; in step 4...
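Putting the steps above together, the optical branch up to step 403 might be sketched as below; the SAR branch would mirror it with its own filter counts. The input channel count, paddings, and anything beyond the second convolution are assumptions, since paragraph [0054] is cut off here.

import torch.nn as nn

# Hypothetical optical-image branch assembled from steps 401-403.
optical_branch = nn.Sequential(
    nn.Conv2d(4, 96, kernel_size=7, padding=3),    # step 401: 96 7x7 convolutions
    nn.ReLU(inplace=True),
    nn.AvgPool2d(kernel_size=2, stride=2),         # step 402: 2x2 mean downsampling
    nn.Conv2d(96, 128, kernel_size=5, padding=2),  # step 403: 128 5x5 convolutions
    nn.ReLU(inplace=True),
)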



Abstract

The invention relates to a method for matching an optical image and a radar image based on a multi-channel convolutional neural network, comprising the following steps: 1) using a deep convolutional neural network to extract features from the optical image and obtain 32×32×32-dimensional image feature data of the optical image; 2) using another deep convolutional neural network to extract features from the SAR image and obtain 32×32×32-dimensional image feature data of the SAR image; 3) cascading the image feature data of the SAR image and the optical image to form a joint feature; 4) constructing a matching network from the joint feature, performing fully connected matching classification, and outputting the matching result. Compared with the prior art, the present invention has the advantages of feature cascading, stable matching, and the like.
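A hedged sketch of steps 3) and 4) follows: the two 32×32×32 feature volumes are cascaded (concatenated) into a joint feature and classified as match or no-match with fully connected layers. PyTorch, the hidden width of 512, and the binary output are assumptions; the abstract only specifies a fully connected matching classification.

import torch
import torch.nn as nn

class MatchingHead(nn.Module):
    """Cascade the optical and SAR feature volumes and classify match / no-match."""

    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 32 * 32 * 32, 512),  # joint feature from two 32x32x32 volumes
            nn.ReLU(inplace=True),
            nn.Linear(512, 2),                 # match vs. no-match (assumed binary output)
        )

    def forward(self, optical_feat: torch.Tensor, sar_feat: torch.Tensor):
        # optical_feat, sar_feat: (B, 32, 32, 32) feature volumes from the two branches
        joint = torch.cat([optical_feat, sar_feat], dim=1)  # cascade along the channel axis
        return self.classifier(joint)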

Description

Technical Field

[0001] The invention relates to the field of remote sensing data processing, in particular to a method for matching optical images and radar images based on a multi-channel convolutional neural network.

Background Art

[0002] At present, in the field of remote sensing data processing, multispectral optical images and Synthetic Aperture Radar (SAR) images are registered and then used together in subsequent processing, which is of great significance. In image registration, because of the large differences in imaging mechanism, imaging frequency, and number of image channels between optical images and SAR images, high-precision registration of the two is very difficult.

[0003] Most existing general methods adopt one of two approaches:

[0004] 1) Merge the multi-channel optical image into a single-channel grayscale image, then directly compute the normalized correlation coefficient with a SAR image of the same size, and judge whether the two match by the magnitude of the normalized cor...
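For contrast with the prior-art approach 1) in paragraph [0004], a minimal sketch of the normalized correlation coefficient between a grayscale optical patch and an equally sized SAR patch is given below; NumPy and the decision threshold are assumptions.

import numpy as np

# Prior-art baseline sketch: normalized correlation coefficient between a
# single-channel grayscale optical image and a same-sized SAR image.
def normalized_correlation(optical_gray: np.ndarray, sar: np.ndarray) -> float:
    a = optical_gray.astype(np.float64).ravel()
    b = sar.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# A match would then be declared when the coefficient exceeds some
# (assumed) threshold, e.g. normalized_correlation(gray, sar) > 0.7.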

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K 9/62; G06K 9/46; G06N 3/04
CPC: G06V 10/40; G06N 3/045; G06F 18/22
Inventors: 张绍明, 熊璐, 吴睿泽, 阳群益
Owner: TONGJI UNIV