
A spatial relationship matching method and system suitable for video/image local features

A technology relating to spatial relationships and local features, applied in special data processing applications, instruments, electrical digital data processing, etc., which can solve problems such as local features lacking direction discrimination and the reduced discriminative ability of SIFT.

Active Publication Date: 2018-06-05
中科星云(鹤壁)人工智能研究院有限公司

AI Technical Summary

Problems solved by technology

[0003] However, in order to ensure the robustness of local features to various types of transformation, their discriminative ability is severely reduced. This manifests in two ways: 1) only a small histogram of oriented gradients (HOG, Histogram of Oriented Gradients) over the neighborhood space is used as the feature descriptor of the center point (see Lowe, David G. Object recognition from local scale-invariant features. Proceedings of the International Conference on Computer Vision, 2, pp. 1150–1157, 1999), so the descriptors of objects with locally similar texture distributions, such as text and grass in video images, cannot be distinguished; 2) the main-orientation extraction added to handle rotation leaves the features without direction discrimination, e.g. they cannot distinguish "6" from "9".
In order to improve retrieval speed, bag-of-words (BoW, Bag of Words) technology (see Sivic, Josef. Efficient visual search of videos cast as text retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(4), pp. 591–605, 2009) quantizes SIFT descriptors into a set of visual words, which further reduces the discriminative ability of SIFT.
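For illustration, here is a minimal sketch of this quantization step, using OpenCV SIFT and scikit-learn k-means; the vocabulary size, training images, and function names are assumptions for the example, not part of the patent.

```python
# Minimal sketch of BoW quantization of SIFT descriptors (Sivic-style).
# Vocabulary size and training images are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(images, vocab_size=1000):
    """Cluster SIFT descriptors from training images into visual words."""
    sift = cv2.SIFT_create()
    descriptors = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            descriptors.append(desc)
    all_desc = np.vstack(descriptors)
    return KMeans(n_clusters=vocab_size, n_init=4).fit(all_desc)

def quantize(desc, kmeans):
    """Map each 128-D SIFT descriptor to its nearest visual-word id."""
    return kmeans.predict(desc)
```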
[0004] There are usually two types of methods for improving SIFT features. The first verifies the spatial relationships of SIFT feature points and removes matching points whose spatial relationship does not conform to an affine transformation; the typical method is random sample consensus (RANSAC, RANdom SAmple Consensus) (see M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981), whose disadvantage is high computational complexity. The second maps SIFT features into a low-dimensional space for a secondary partition, improving the discriminative ability of the visual words in the bag of words; the typical method is Hamming Embedding (HE) (see H. Jégou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large scale image search. ECCV, 2008), whose disadvantage is high data dependence.
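As a concrete illustration of the first type of method, the sketch below verifies SIFT matches with OpenCV's RANSAC-based homography estimation; the inlier threshold and helper name are assumptions, and a homography is used here in place of a general affine model.

```python
# Minimal sketch of RANSAC spatial verification of SIFT matches
# (Fischler & Bolles); the reprojection threshold is an assumption.
import cv2
import numpy as np

def ransac_verify(kp1, kp2, matches, thresh=3.0):
    """Keep only matches consistent with a single homography."""
    if len(matches) < 4:          # homography needs at least 4 correspondences
        return []
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, thresh)
    if mask is None:
        return []
    return [m for m, ok in zip(matches, mask.ravel()) if ok]
```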


Embodiment Construction

[0049] In order to solve the above technical problems, the present invention proposes a spatial relationship matching method suitable for video/image local features, comprising the following implementation steps:

[0050] Step 1: obtain all video/image feature points of the video/image together with their attribute information; obtain the scale information of all the feature points from the feature points and their attribute information; use the scale information to determine the local neighborhood space of each feature point; obtain the visual keyword codes of all the feature points within that local neighborhood space; quantize the visual keyword codes to generate new visual keyword codes; and sort the new visual keyword codes to generate the spatial relationship code of each feature point.
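A minimal sketch of how Step 1 might look in code is given below. The scale-to-radius factor, the coarse re-quantization by integer division, and the fixed code length are illustrative assumptions; the patent does not fix these parameters here.

```python
# Hypothetical sketch of Step 1: build a spatial relationship code for one
# feature point from the visual-word ids of its scale-adaptive neighborhood.
import numpy as np

def spatial_relationship_code(points, words, idx, radius_factor=6.0,
                              quant=64, code_len=8):
    """points: (N, 3) array of (x, y, scale); words: (N,) visual-word ids.
    Returns the sorted neighborhood word code for feature point `idx`."""
    x, y, s = points[idx]
    radius = radius_factor * s                    # scale-adaptive neighborhood
    d = np.hypot(points[:, 0] - x, points[:, 1] - y)
    neigh = np.where((d > 0) & (d <= radius))[0]  # exclude the point itself
    coarse = words[neigh] // quant                # re-quantize to coarser words
    code = np.sort(coarse)[:code_len]             # sorting makes the code canonical
    return np.pad(code, (0, code_len - len(code)), constant_values=-1)
```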

[0051] Step 2: compare the spatial relationship codes of the video/image feature points to be matched with those of the video/image feature points, construct a relationship matrix, calculate the similarity of the spatial relationship codes between the feature points to be matched and the video/image feature points within the relationship matrix, and fuse this spatial relationship similarity with the visual similarity between the feature points.
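Under the same assumptions, a sketch of Step 2 follows: code similarity is taken here as the fraction of shared word ids between two spatial relationship codes, and the relationship-matrix scores are fused with visual similarity by a simple weighted sum; both choices are illustrative, not the patent's exact formulation.

```python
# Hypothetical sketch of Step 2: relationship matrix plus similarity fusion.
import numpy as np

def code_similarity(code_a, code_b):
    """Fraction of shared word ids between two spatial relationship codes."""
    a = code_a[code_a >= 0]                       # drop -1 padding
    b = code_b[code_b >= 0]
    shared = len(np.intersect1d(a, b))
    return shared / max(len(a), len(b), 1)

def match_score(codes_q, codes_db, visual_sim, alpha=0.5):
    """Build the relationship matrix of code similarities between query and
    database feature points, then fuse it with the visual similarity matrix."""
    rel = np.array([[code_similarity(a, b) for b in codes_db] for a in codes_q])
    return alpha * visual_sim + (1.0 - alpha) * rel
```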



Abstract

The present invention discloses a spatial relationship matching method and system applicable to video/image local features. The method comprises: acquiring the scale information of all video/image feature points; determining the local neighborhood space of each video/image feature point; acquiring the visual keyword codes of all the video/image feature points in the local neighborhood spaces; quantizing the visual keyword codes to generate new visual keyword codes; sorting the new visual keyword codes to generate the spatial relationship codes of the video/image feature points; comparing the spatial relationship codes between the video/image feature points to be matched and the video/image feature points; constructing a relationship matrix; calculating the similarity of the spatial relationship codes between the video/image feature points to be matched and the video/image feature points in the relationship matrix; and fusing the visual similarity and the spatial relationship similarity between the video/image feature points to be matched and the video/image feature points.

Description

Technical Field

[0001] The invention relates to content-based image and video retrieval technology, and in particular to a spatial relationship matching method and system suitable for video/image local features.

Background Technique

[0002] The rapid growth of visual information such as Internet images and videos has brought great challenges to the organization and management of information. Similar image and video detection is an important technical means for managing and retrieving video image content. Local features, represented by the scale-invariant feature transform (SIFT, Scale-Invariant Feature Transform), provide a robust feature expression method for similar video image content detection; being invariant to brightness, blur, viewing angle, rotation, and other transformations, they have become an extremely important technology in video image content retrieval applications.

[0003] However, in order to ensure the robustness of local features t...
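Since the method relies on the scale attribute of each feature point, here is a minimal background sketch of extracting SIFT keypoints together with their scale and main-orientation attributes using OpenCV; the image path is a placeholder.

```python
# Minimal sketch: SIFT keypoints carry per-point scale and orientation,
# the attributes the method's neighborhood construction builds on.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
for kp in keypoints[:5]:
    # kp.size is the keypoint scale (diameter); kp.angle its main orientation
    print(kp.pt, kp.size, kp.angle)
```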


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F17/30
CPC: G06F16/783
Inventors: 张冬明, 靳国庆, 袁庆升, 张勇东, 包秀国
Owner: 中科星云(鹤壁)人工智能研究院有限公司