
Heterogeneous image matching method based on deep learning

A heterogeneous image and deep learning technology, applied in the field of image processing. It addresses problems such as the difficulty of improving matching accuracy, the unsuitability of a dual-branch structure for fusing multi-source data, and the loss of spatial information in cascaded feature vectors, with the effects of improving accuracy, accelerating network convergence, and facilitating the fusion of multi-source data.

Active Publication Date: 2021-09-07
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0006] The heterogeneous image matching method based on deep learning provided by the present invention addresses shortcomings of existing heterogeneous image matching approaches: a dual-branch structure is not conducive to the mutual fusion of multi-source data, cascaded feature vectors lose a large amount of spatial information, and the matching accuracy is therefore difficult to improve.



Examples


Embodiment 1

[0061] In view of the situation described in the background art, the present invention proposes a new heterogeneous image matching method based on deep learning (see Figure 1), which includes the following steps:

[0062] (1) Construct a data set from the heterogeneous images to be matched:

[0063] To make the results of the algorithm more convincing, the present invention uses the public VIS-NIR data set. The data set contains 9 groups: Country, Field, Forest, Indoor, Mountain, Oldbuilding, Street, Urban and Water. In each group, matched heterogeneous image blocks and unmatched heterogeneous image blocks each account for half of the samples; the label corresponding to a matched heterogeneous image block is 1, and the label corresponding to an unmatched heterogeneous image block is 0.

[0064] See Table 1 for the size distribution of each group of data ...
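As a rough illustration of this step, the sketch below shows how such a paired-patch data set with match / non-match labels could be wrapped for training, assuming the VIS-NIR patches have already been loaded into arrays; the array names and the 64x64 patch size are illustrative assumptions rather than details taken from the patent.

```python
# Minimal sketch (assumptions: in-memory arrays, 64x64 grayscale patches).
# Yields (visible patch, near-infrared patch, label) triples, where the
# label is 1 for a matched heterogeneous pair and 0 for an unmatched pair,
# following paragraph [0063].
import numpy as np
import torch
from torch.utils.data import Dataset


class VisNirPairDataset(Dataset):
    def __init__(self, vis_patches, nir_patches, labels):
        assert len(vis_patches) == len(nir_patches) == len(labels)
        self.vis = torch.as_tensor(vis_patches, dtype=torch.float32)
        self.nir = torch.as_tensor(nir_patches, dtype=torch.float32)
        self.labels = torch.as_tensor(labels, dtype=torch.float32)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.vis[idx], self.nir[idx], self.labels[idx]


# Toy usage with random data standing in for one group (e.g. "Country"):
# half matched (label 1) and half unmatched (label 0) pairs.
vis = np.random.rand(8, 1, 64, 64).astype(np.float32)
nir = np.random.rand(8, 1, 64, 64).astype(np.float32)
labels = np.array([1, 0] * 4)
dataset = VisNirPairDataset(vis, nir, labels)
print(len(dataset), dataset[0][0].shape, dataset[0][2])
```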

Embodiment 2

[0081] This heterogeneous image matching method based on deep learning is the same as Embodiment 1; the feature map fusion in step (5) of the present invention specifically includes the following steps:

[0082] (5a) Denote the feature map corresponding to a single visible light image block as V and the feature map corresponding to a single near-infrared image block as N; the fused feature map is then F = N - V, where V and N are three-dimensional matrices of the same size.

[0083] (5b) To prevent a large number of zeros in F from causing the gradient to vanish during training, the feature maps of each batch β = {F_1, ..., F_m} are normalized:

[0084] $\mu_\beta = \frac{1}{m}\sum_{i=1}^{m} F_i$

[0085] $\sigma_\beta^2 = \frac{1}{m}\sum_{i=1}^{m} \left(F_i - \mu_\beta\right)^2$

[0086] $\hat{F}_i = \frac{F_i - \mu_\beta}{\sqrt{\sigma_\beta^2 + \epsilon}}$

[0087] $\tilde{F}_i = \gamma \hat{F}_i + \lambda$
[0088] where m represents the number of heterogeneous image patch pairs input in each batch, F_i represents the fused feature map corresponding to the i-th input pair, ε is a small constant that keeps the denominator non-zero, and γ and λ represent the learnable scaling and offset parameters of...
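A minimal sketch of steps (5a)-(5b) is given below, assuming a PyTorch implementation in which BatchNorm2d supplies the learnable scale γ and offset λ; the 512-channel feature maps and the 8x8 spatial size are illustrative assumptions, not values taken from the patent.

```python
# Sketch of (5a)-(5b): difference fusion F = N - V followed by per-batch
# normalization of the fused feature maps, so that a fused map dominated by
# zeros does not stall gradient flow during training.
import torch
import torch.nn as nn


class DifferenceFusion(nn.Module):
    def __init__(self, channels=512):
        super().__init__()
        # BatchNorm2d learns a scale (gamma) and offset (lambda) per channel.
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, feat_nir, feat_vis):
        fused = feat_nir - feat_vis   # (5a): F = N - V, same-sized 3-D maps
        return self.bn(fused)         # (5b): normalize each batch of fused maps


# Toy usage: a batch of m = 4 pairs with 512-channel, 8x8 feature maps.
fusion = DifferenceFusion(channels=512)
n_map = torch.randn(4, 512, 8, 8)
v_map = torch.randn(4, 512, 8, 8)
print(fusion(n_map, v_map).shape)     # torch.Size([4, 512, 8, 8])
```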

Embodiment 3

[0091] This heterogeneous image matching method based on deep learning is the same as Embodiments 1-2; the calculation of the contrastive loss in step (6a) of the present invention includes the following steps:

[0092] (6a1) Denote the feature vectors obtained from feature map V and feature map N after global average pooling as v and n, respectively; the average Euclidean distance D(n, v) between the feature vectors is then:

[0093]

[0094] where k represents the dimension of the feature vectors; in this embodiment, k is 512.
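Since the equation image for D(n, v) is not reproduced in this extract, the sketch below implements one plausible reading of the "average Euclidean distance": the L2 distance between the pooled feature vectors divided by their dimension k (k = 512 in this embodiment). The division by k is an interpretation, not a verbatim reproduction of the patent's formula.

```python
# Hedged sketch of step (6a1): distance between pooled feature vectors.
import torch


def average_euclidean_distance(n_vec, v_vec):
    """n_vec, v_vec: (batch, k) feature vectors after global average pooling."""
    k = n_vec.shape[1]
    return torch.norm(n_vec - v_vec, p=2, dim=1) / k


# Toy usage with k = 512.
n_vec = torch.randn(4, 512)
v_vec = torch.randn(4, 512)
print(average_euclidean_distance(n_vec, v_vec).shape)   # torch.Size([4])
```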

[0095] (6a2) To make D(n, v) as small as possible for matched heterogeneous image blocks and as large as possible for unmatched heterogeneous image blocks, a contrastive loss function is designed for a single sample:

[0096]

[0097] where y represents the true label of the input data (y is 1 when the input heterogeneous image blocks match; when they do not match, y is 0)...
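The contrastive loss equation itself is also not reproduced in this extract; the sketch below uses the standard margin-based contrastive loss, which matches the stated goal of making D(n, v) small for matched pairs (y = 1) and large for unmatched pairs (y = 0). The margin value and the exact functional form are assumptions.

```python
# Hedged sketch of step (6a2): margin-based contrastive loss over a batch.
import torch


def contrastive_loss(dist, y, margin=1.0):
    """dist: D(n, v) for each pair; y: 1 for matched, 0 for unmatched."""
    matched_term = y * dist                                           # pull matched pairs together
    unmatched_term = (1.0 - y) * torch.clamp(margin - dist, min=0.0)  # push unmatched pairs apart
    return (matched_term + unmatched_term).mean()


# Toy usage: two matched and two unmatched pairs.
dist = torch.tensor([0.1, 0.2, 1.5, 0.4])
y = torch.tensor([1.0, 1.0, 0.0, 0.0])
print(contrastive_loss(dist, y))   # the unmatched pair at distance 0.4 is penalized
```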



Abstract

The heterogeneous image matching method based on deep learning provided by the present invention first constructs a heterogeneous image block data set; preprocesses the images; obtains the feature map of each image block; obtains feature vectors from the feature maps; fuses and normalizes the feature maps; trains the image matching network; and predicts the matching probability. The invention effectively overcomes the over-fitting problem of heterogeneous image block matching in the prior art, greatly improves the performance of the network, improves training efficiency, and enhances the robustness of the network. The invention can be applied to fields such as heterogeneous image registration, image tracking, and multi-view reconstruction.
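As a rough end-to-end illustration of the pipeline summarized above, the sketch below wires these stages into a single forward pass that outputs a matching probability; the shared-weight backbone and all layer sizes are assumptions, not the patent's actual network.

```python
# Hedged end-to-end sketch: feature extraction, difference fusion with batch
# normalization, global average pooling, and a matching-probability head.
import torch
import torch.nn as nn


class MatchingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                 # toy feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.bn = nn.BatchNorm2d(64)                   # normalize fused maps
        self.head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, vis, nir):
        v_map = self.backbone(vis)                     # feature map of visible patch
        n_map = self.backbone(nir)                     # feature map of NIR patch
        fused = self.bn(n_map - v_map)                 # fuse and normalize
        vec = fused.mean(dim=(2, 3))                   # global average pooling
        return self.head(vec).squeeze(1)               # matching probability in (0, 1)


# Toy usage on a batch of two 64x64 grayscale patch pairs.
net = MatchingNet()
prob = net(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(prob)
```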

Description

Technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to a heterogeneous image matching method based on deep learning.

Background technique

[0002] Because images of the same target acquired by different devices can not only provide richer information but also overcome the inherent defects of a single data source, multi-source image research has become increasingly popular. The present invention focuses on the similarity matching problem for multi-source data and verifies the effectiveness of the algorithm on public visible-light and near-infrared data sets; for sample data, refer to Figure 1. Because the pixel values of a visible and near-infrared cross-spectrum image pair are related non-linearly at the same target, this problem is more complicated than matching homologous visible light images.

[0003] At this stage, image matching methods based on deep learning can be roughly divided into ...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G06N3/04
CPC: G06V10/757; G06N3/045; G06F18/241
Inventors: 王爽, 焦李成, 方帅, 权豆, 王若静, 梁雪峰, 侯彪, 刘飞航
Owner: XIDIAN UNIV