
Image matching method and device based on graph neural network fusion model

A graph-neural-network and fusion-model technique applied in the field of image matching. It addresses the problem of low image-matching accuracy in complex scenes, and achieves elimination of interference from irrelevant image content, good matching performance, and good stability and generality.

Pending Publication Date: 2021-12-31
PLA Strategic Support Force Information Engineering University (PLA SSF IEU)

AI Technical Summary

Problems solved by technology

[0004] The purpose of this application is to provide an image matching method based on a graph neural network fusion model, so as to solve the problem of low image-matching accuracy in complex scenes.



Examples


Embodiment Construction

[0023] To address the problem described in the background art, matching of the entities within an image can be used to complete the matching task for the whole image. In recent years, the development of graph neural networks (GNNs) has provided a good tool for processing graph-structured data, and it also offers new ideas for modelling the spatial-relationship features of the entities in an image.
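The patent does not specify the GNN architecture. As a minimal sketch of the kind of message passing a GNN performs on an entity graph, the toy layer below (mean-aggregation with a linear map and ReLU, all names hypothetical) shows how each entity node updates its feature vector from its neighbours:

```python
import numpy as np

def gnn_layer(node_feats: np.ndarray, adj: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One mean-aggregation message-passing layer: each entity node
    averages its neighbours' features, applies a linear map, then ReLU.
    Illustrative only; the patent does not disclose its layer design."""
    deg = np.clip(adj.sum(axis=1, keepdims=True), 1, None)  # avoid divide-by-zero
    messages = adj @ node_feats / deg                       # mean over neighbours
    return np.maximum(messages @ weight, 0.0)               # linear map + ReLU

# Toy usage: 3 fully connected entity nodes with 2-dim features.
feats = np.ones((3, 2))
adj = np.ones((3, 3))
print(gnn_layer(feats, adj, np.eye(2)).shape)  # (3, 2)
```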

[0024] The method of the present invention uses entity matching to accomplish the image-matching task, which can effectively eliminate the interference of irrelevant content in the image. The present invention trains a graph neural network on graph-structured data built from the detected entities, and the resulting network model can effectively extract the spatial relationships between the entities in the image. After fusing each entity's visual features with the spatial-relationship features, the similarity between an image pair is used to determine whether the images match. The experimental results show that the im...
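The paragraph above fuses the entities' visual features with the GNN spatial-relationship features. The patent does not describe the fusion network's layout; as one plausible stand-in, the sketch below concatenates the two vectors and applies a single fully connected layer (function name, shapes, and activation are assumptions):

```python
import numpy as np

def fuse_features(visual_agg: np.ndarray, spatial: np.ndarray,
                  w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hypothetical feature-fusion step: concatenate the aggregated
    visual vector with the GNN spatial vector, then apply one fully
    connected layer with tanh. A stand-in for the patent's unspecified
    feature-fusion network."""
    x = np.concatenate([visual_agg, spatial])
    return np.tanh(w @ x + b)

# Toy usage: 2-dim visual vector + 2-dim spatial vector -> 3-dim fused vector.
fused = fuse_features(np.ones(2), np.ones(2), np.zeros((3, 4)), np.zeros(3))
print(fused.shape)  # (3,)
```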


Abstract

The invention relates to an image matching method and device based on a graph neural network fusion model. The method comprises the following steps: 1) performing entity target detection on an image with an entity detection algorithm to obtain image blocks containing entities, and extracting visual features of the entity image blocks with a visual-feature-extraction network; 2) assembling all entity image blocks in the image into a graph, and extracting spatial-relationship features with a graph neural network; 3) aggregating the visual features of the entities into a visual-feature aggregation vector, fusing that vector with the spatial-relationship features via a feature fusion network, and then computing the Euclidean distance between the fused feature vectors to measure the similarity of an image pair: if the distance is smaller than a threshold, the pair is predicted to match; otherwise it is predicted not to match. The method effectively avoids the interference of irrelevant image content on matching and achieves higher matching precision.
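The final decision rule in the abstract (step 3) is fully specified: an image pair matches when the Euclidean distance between the fused feature vectors falls below a threshold. A minimal sketch, with an arbitrary threshold value chosen for illustration:

```python
import numpy as np

def match_images(fused_a: np.ndarray, fused_b: np.ndarray,
                 threshold: float = 0.5) -> bool:
    """Decision rule from the abstract: predict 'match' when the
    Euclidean distance between fused feature vectors is below a
    threshold. The threshold value 0.5 is illustrative, not from
    the patent."""
    distance = float(np.linalg.norm(fused_a - fused_b))
    return distance < threshold

# Toy usage with 4-dimensional fused vectors.
a = np.array([0.1, 0.2, 0.3, 0.4])
b = np.array([0.1, 0.25, 0.3, 0.4])
print(match_images(a, b))  # distance 0.05 < 0.5 -> True
```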

Description

Technical field [0001] The present invention relates to the field of image matching, in particular to an image matching method and apparatus based on a graph neural network fusion model. Background technique [0002] Image matching refers to determining the similarity and consistency of an image pair from correspondences between image content, features, structure, and the like. Image matching has a wide range of applications in computer vision, such as image-based navigation, place recognition, and loop-closure detection in SLAM. Image matching methods can be divided into two categories. The first category is based on hand-crafted features: local descriptors such as SIFT, SURF, and ORB represent local features, and all descriptors in an image are then aggregated with an aggregation model, such as the bag-of-words model (BoW), the vector of locally aggregated descriptors (VLAD), or the Fisher vector ...
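Of the aggregation models the background cites, VLAD has the simplest closed form: sum the residuals of each local descriptor to its nearest cluster centroid, then L2-normalise. A self-contained NumPy sketch (illustrative; real systems use learned codebooks and extra normalisation steps):

```python
import numpy as np

def vlad(descriptors: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """VLAD aggregation: for each local descriptor, accumulate its
    residual to the nearest codebook centroid, then L2-normalise the
    flattened result. Simplified sketch of the method cited in the
    background section."""
    k, d = centroids.shape
    v = np.zeros((k, d))
    # hard-assign each descriptor to its nearest centroid
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    for i, c in enumerate(nearest):
        v[c] += descriptors[i] - centroids[c]
    flat = v.ravel()
    norm = np.linalg.norm(flat)
    return flat / norm if norm > 0 else flat

# Toy usage: 2 descriptors, codebook of 2 centroids, 2-dim features.
desc = np.array([[1.1, 0.0], [0.0, 0.9]])
cents = np.array([[1.0, 0.0], [0.0, 1.0]])
print(vlad(desc, cents).shape)  # (4,)
```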

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06K9/62; G06N3/04; G06N3/08; G06F16/583
CPC: G06N3/08; G06F16/583; G06N3/045; G06F18/22; G06F18/253
Inventors: 李科, 彭锦超, 万刚, 李锋, 曹雪峰
Owner: PLA Strategic Support Force Information Engineering University (PLA SSF IEU)