An image registration method based on convolution neural network

A convolutional-neural-network-based image registration technology, applied in the field of image processing, that addresses problems such as uncorrectable complex distortion and the limited, fixed number of transformation parameters a neural network can generate, achieving accuracy and robustness.

Inactive Publication Date: 2019-03-29
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

In feature point matching, however, a neural network can only generate a limited, fixed number of transformation parameters and therefore cannot correct complex distortions.

Method used



Examples


Embodiment 1

[0054] An embodiment of the present invention provides an image registration method based on a convolutional neural network; see Figure 1 and Figure 2. The method includes the following steps:

[0055] 101: Use the VGG-16 convolutional network to extract feature points from the reference image and the moving image respectively, generating a reference feature point set and a moving feature point set;

[0056] 102: When the distance matrix of the feature points simultaneously satisfies the first and second constraint conditions, perform a pre-matching operation; that is, feature point x in the reference feature point set and feature point y in the moving feature point set are taken as matching points;
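The two constraint conditions are not spelled out in this excerpt; a common pair in feature matching is a mutual-nearest-neighbour check plus a Lowe-style ratio test on the distance matrix. A minimal numpy sketch under that assumption (the function name and both constraints are illustrative stand-ins):

```python
import numpy as np

def prematch(X, Y, ratio=0.8):
    """Pre-match descriptor sets X (n, d) and Y (m, d).

    Assumed constraint 1: x and y are mutual nearest neighbours.
    Assumed constraint 2: nearest distance < ratio * second-nearest distance.
    Both are illustrative stand-ins for the patent's unspecified constraints.
    """
    # Pairwise Euclidean distance matrix D[i, j] = ||X_i - Y_j||
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    pairs = []
    for i in range(len(X)):
        order = np.argsort(D[i])
        j = order[0]
        # Constraint 2 (assumed): ratio test against the second-best match.
        if len(order) > 1 and D[i, j] >= ratio * D[i, order[1]]:
            continue
        # Constraint 1 (assumed): mutual nearest neighbour.
        if np.argmin(D[:, j]) == i:
            pairs.append((i, j))
    return pairs
```

The mutual check removes many-to-one matches; the ratio test removes ambiguous ones.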

[0057] 103: Set a threshold and, combined with iteration, dynamically select interior points among the pre-matched feature points, screen out the final feature points, and obtain the prior probability matrix;
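Step 103 can be sketched as an iterative re-selection loop: fit a simple model on the current interior points, re-admit pairs whose residual falls below the threshold, and stop when the set stabilises. The translation-only model and fixed threshold below are assumptions for illustration; the output is the prior probability matrix used in the next step:

```python
import numpy as np

def select_inliers(xs, ys, pairs, thresh=1.0, iters=10):
    """Iteratively select interior points among pre-matched pairs (sketch).

    xs: (n, 2) reference points, ys: (m, 2) moving points, pairs: pre-matches.
    A translation-only model keeps the sketch short; the patent's actual
    transform and thresholding rule are not specified in this excerpt.
    """
    inliers = list(pairs)
    for _ in range(iters):
        # Fit a simple model (mean displacement) to the current interior points.
        t = np.mean([ys[j] - xs[i] for i, j in inliers], axis=0)
        # Keep only pairs whose residual under the model is below the threshold.
        new = [(i, j) for i, j in pairs
               if np.linalg.norm(ys[j] - (xs[i] + t)) < thresh]
        if not new or new == inliers:
            break
        inliers = new
    # Prior probability matrix: uniform mass over the surviving matches.
    P = np.zeros((len(xs), len(ys)))
    for i, j in inliers:
        P[i, j] = 1.0 / len(inliers)
    return inliers, P
```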

[0058] 104: Find the optimal parameters according to the prior probability matrix and the EM algorithm to realize image registration.
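The abstract describes this step as searching for the optimal parameters with the EM algorithm given the prior probability matrix. A minimal, CPD-flavoured numpy sketch under an assumed Gaussian noise model and affine transform (the patent's actual model is not given in this excerpt):

```python
import numpy as np

def em_affine(xs, ys, P, sigma2=1.0, iters=20):
    """EM-style search for affine parameters (illustrative sketch).

    xs: (n, 2) reference points, ys: (m, 2) moving points, P: (n, m) prior
    probability matrix. Assumes y ~= A x + b with isotropic Gaussian noise.
    """
    A, b = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # E-step: responsibilities = prior x Gaussian likelihood, normalised.
        T = xs @ A.T + b                                  # transformed points
        d2 = ((T[:, None, :] - ys[None, :, :]) ** 2).sum(axis=2)
        R = P * np.exp(-d2 / (2 * sigma2))
        R /= R.sum() + 1e-12
        # M-step: weighted least squares for [A | b] in homogeneous coordinates.
        Xh = np.hstack([xs, np.ones((len(xs), 1))])       # (n, 3)
        G = (Xh.T * R.sum(axis=1)) @ Xh                   # (3, 3) normal matrix
        H = Xh.T @ (R @ ys)                               # (3, 2) right-hand side
        M = np.linalg.solve(G + 1e-9 * np.eye(3), H)      # (3, 2) solution
        A, b = M[:2].T, M[2]
    return A, b
```

With a prior that already pins down the correspondences, the M-step reduces to a weighted least-squares affine fit.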

Embodiment 2

[0065] The scheme of Embodiment 1 is further described below with specific calculation formulas and examples; see Figures 1-2 and the following description for details:

[0066] 201: Use the VGG-16 convolutional network to extract all feature points from the reference image I_X, generating the reference feature point set X, and to extract all feature points from the moving image I_Y, generating the moving feature point set Y;

[0067] In a specific implementation, the reference image I_X and the moving image I_Y are unified in size to 224×224 so as to obtain a receptive field of suitable size and reduce the amount of computation.

[0068] The VGG-16 convolutional network comprises five sections of convolution; each section has 2-3 convolutional layers, and each section is connected to a maximum pooling layer at the end...
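For a 224×224 input, the five pooling layers fix the feature-map sizes; the arithmetic below (standard VGG-16 channel counts, size-preserving 3×3 convolutions, 2×2/stride-2 pooling) shows why the pool3 output used later is a 28×28 map:

```python
# Spatial sizes through VGG-16's five conv sections for a 224x224 input.
# Conv layers use 3x3 kernels with padding 1 (size-preserving); each section
# ends with a 2x2 max-pool of stride 2, halving the spatial size.
channels = [64, 128, 256, 512, 512]   # output channels of sections 1..5
size = 224
shapes = []
for c in channels:
    size //= 2                        # effect of the section's max-pool
    shapes.append((size, size, c))
print(shapes)
# pool3 (end of section 3) yields the 28x28x256 map used for descriptors:
assert shapes[2] == (28, 28, 256)
```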

Embodiment 3

[0119] The schemes of Embodiments 1 and 2 are further described below with specific examples and calculation formulas; see the following description for details:

[0120] 301: Extract feature points:

[0121] Use the VGG-16 convolutional network to extract feature points from the reference image I_X, generating the reference feature point set X, and from the moving image I_Y, generating the moving feature point set Y; the network construction is further explained with reference to Figure 1;

[0122] 1) The reference image I_X and the moving image I_Y are unified in size to 224×224 to obtain a receptive field of suitable size and reduce the amount of computation.

[0123] 2) The VGG-16 convolutional network comprises five sections of convolution calculation. A 28×28 grid is used to segment the reference image I_X and the moving image I_Y. The output of the pool3 pooling layer yields a 256-d feature map, and a feature descriptor is generated...
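Since the 28×28 pool3 grid tiles the 224×224 image, each cell corresponds to an 8×8 image patch. A small helper (illustrative only; it ignores the true receptive field, which extends well beyond the 8×8 patch) maps a grid cell back to pixel coordinates:

```python
def cell_to_pixel(r, c, image_size=224, grid=28):
    """Map a pool3 grid cell (r, c) to the centre of its image patch.

    Each of the 28x28 cells covers image_size // grid = 8 pixels per side.
    Illustrative helper; not part of the patent text.
    """
    stride = image_size // grid       # 8 pixels per cell
    return (r * stride + stride // 2, c * stride + stride // 2)
```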



Abstract

The invention discloses an image registration method based on a convolutional neural network. The method includes the steps of: using the VGG-16 convolutional network to extract feature points from the reference image and the moving image respectively, thereby generating a reference feature point set and a moving feature point set; when the distance matrix of the feature points simultaneously satisfies the first and second constraint conditions, performing a pre-matching operation, that is, a feature point x in the reference feature point set and a feature point y in the moving feature point set are matching points; setting a threshold and, combined with iteration, dynamically selecting the interior points of the pre-matched feature points, screening out the final feature points, and obtaining a prior probability matrix; and searching for the optimal parameters according to the prior probability matrix and the EM algorithm to realize image registration. By dynamically selecting interior points during feature point matching, the invention increases the interior points step by step and improves registration accuracy.

Description

Technical field

[0001] The invention relates to the technical field of image processing, and in particular to an image registration method based on a convolutional neural network.

Background technique

[0002] Image registration is one of the important tasks in the field of image processing and is also the basis of image fusion. Since image registration data come from different shooting times, different angles, or different physical devices, selecting stable feature points and matching them correctly is the key issue of registration.

[0003] At present, the traditional Scale-Invariant Feature Transform (SIFT) detection algorithm and some improved algorithms based on it can basically realize the selection of feature points. However, for multi-temporal or multi-modal image registration, where surface appearance can differ greatly, the SIFT algorithm may generate many outliers or even fail to detect enough feature points, thus limiting the application of image registration...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/33
CPC: G06T7/33; G06T2207/10004; G06T2207/20081; G06T2207/20084
Inventor: 吕卫, 赵薇, 褚晶辉
Owner: TIANJIN UNIV