
Spatial relationship matching method and system applicable to video/image local features

A spatial-relationship and local-feature technology, applied in special data processing applications, instruments, electrical digital data processing, etc., which addresses problems such as the reduced discriminative ability of SIFT local features and their inability to distinguish locally similar content.

Active Publication Date: 2016-01-06
Owner: 中科星云(鹤壁)人工智能研究院有限公司

AI Technical Summary

Problems solved by technology

[0003] However, to ensure the robustness of local features against various types of transformations, their discriminative ability is severely reduced. This manifests in two ways: 1) only a Histogram of Oriented Gradients (HOG) computed over a small neighborhood around the center point is used as the feature descriptor (see Lowe, David G. Object recognition from local scale-invariant features. Proceedings of the International Conference on Computer Vision, 2, pp. 1150–1157, 1999), so the descriptor cannot distinguish objects such as text or grass that have locally similar texture distributions in video images; 2) the dominant-orientation extraction added to handle rotation leaves the features with no sense of direction, so that, for example, "6" and "9" cannot be told apart. Furthermore, to improve retrieval speed, the SIFT descriptor is quantized into visual words with the Bag of Words (BoW) technique (see Sivic, Josef. Efficient visual search of videos cast as text retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(4), pp. 591–605, 2009), which further reduces the discriminative ability of SIFT.
[0004] There are usually two types of methods for improving SIFT features. The first verifies the spatial relationships of SIFT local feature points and removes matching points whose spatial relationship does not conform to an affine transformation; the typical method is RANdom SAmple Consensus (RANSAC) (see M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981), and its drawback is high computational complexity. The second maps SIFT features into a low-dimensional space for a secondary partitioning, improving the discriminative ability of the visual words in the bag of words; the typical method is Hamming Embedding (HE) (see H. Jégou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large scale image search. ECCV, 2008), and its drawback is high data dependence.
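To make the first class of methods concrete, the sketch below uses standard OpenCV calls to match SIFT keypoints between two images and lets RANSAC discard matches whose spatial layout is inconsistent with a single homography. It illustrates generic geometric verification only, not the invention; the ratio-test and reprojection thresholds are common defaults, not values taken from the patent.

```python
import cv2
import numpy as np

def ransac_verified_matches(img1, img2, ratio=0.75, ransac_thresh=5.0):
    """Match SIFT features, then keep only geometrically consistent pairs."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return []

    # Lowe's ratio test on 2-nearest-neighbor matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:  # a homography needs at least 4 correspondences
        return []

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects matches that violate the estimated transformation.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if H is None:
        return []
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```

The cost noted in [0004] shows up here: RANSAC repeatedly samples correspondences and re-fits the model, which is expensive when many candidate pairs must be verified.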




Embodiment Construction

[0049] In order to solve the above technical problems, the present invention proposes a spatial relationship matching method suitable for video/image local features, comprising the following implementation steps:

[0050] Step 1: obtain all video/image feature points of the video/image together with their attribute information; from the feature points and attribute information, obtain the scale information of every feature point; use the scale information to determine the local neighborhood space of each feature point; obtain the visual keyword codes of all the feature points lying in that local neighborhood space; quantize these visual keyword codes to generate new visual keyword codes; and sort the new visual keyword codes to generate the spatial relationship code of the feature point (see the sketch after Step 2 below);

[0051] Step 2: compare the video/image feature points to be matched ...
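The page does not spell out the neighborhood rule, the quantization, or the similarity measure used in these steps, so the Python sketch below fills them in with loudly labeled assumptions: a neighborhood radius proportional to the point's scale, quantization of visual keyword ids by folding them into a smaller codebook, and a Jaccard-style overlap between codes. It is a minimal illustration of the encode-then-compare idea, not the patented implementation.

```python
import numpy as np

def spatial_relationship_code(points, scales, words, idx,
                              radius_factor=6.0, n_bins=64):
    """Step 1 sketch: encode the quantized visual keywords found inside a
    feature point's scale-dependent neighborhood as a sorted code.

    points: (N, 2) array of feature-point coordinates
    scales: (N,)   array of feature-point scales
    words:  (N,)   integer array of visual keyword ids
    idx:    index of the center feature point
    """
    # Assumed rule: neighborhood radius proportional to the point's scale.
    radius = radius_factor * scales[idx]
    dist = np.linalg.norm(points - points[idx], axis=1)
    neighbors = np.where((dist > 0) & (dist <= radius))[0]

    # Assumed quantization: fold keyword ids into a coarser codebook.
    quantized = words[neighbors] % n_bins

    # Sorting makes the code independent of neighbor enumeration order.
    return np.sort(quantized)

def code_similarity(code_a, code_b):
    """Step 2 sketch: Jaccard-style overlap between two codes (assumed)."""
    a, b = set(code_a.tolist()), set(code_b.tolist())
    return len(a & b) / max(len(a | b), 1)
```

In a full pipeline, code_similarity would be evaluated for every candidate pair of feature points to populate the relationship matrix described in the abstract below.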



Abstract

The present invention discloses a spatial relationship matching method and system applicable to video/image local features. The method comprises: acquiring the scale information of all video/image feature points; determining the local neighborhood space of each video/image feature point; acquiring the visual keyword codes of all the video/image feature points in each local neighborhood space; quantizing the visual keyword codes to generate new visual keyword codes; sorting the new visual keyword codes to generate the spatial relationship codes of the video/image feature points; comparing the spatial relationship codes between the video/image feature points to be matched and the video/image feature points; constructing a relationship matrix; calculating the similarity of the spatial relationship codes between the video/image feature points to be matched and the video/image feature points in the relationship matrix; and fusing the visual similarity and the spatial relationship similarity between the video/image feature points to be matched and the video/image feature points.
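As one concrete reading of the matching stage, the sketch below builds the relationship matrix of spatial-relationship-code similarities between the feature points to be matched and the reference feature points, then fuses it with visual similarity. The Jaccard overlap, the weighted-sum fusion, and alpha=0.5 are illustrative assumptions; the abstract does not disclose the actual measures.

```python
import numpy as np

def code_overlap(code_a, code_b):
    """Jaccard overlap of two spatial relationship codes (assumed measure)."""
    a, b = set(code_a), set(code_b)
    return len(a & b) / max(len(a | b), 1)

def fused_similarity(vis_sim, codes_q, codes_r, alpha=0.5):
    """Build the relationship matrix of code similarities, then fuse it
    with visual similarity via an assumed weighted sum.

    vis_sim: (Q, R) array of visual (descriptor) similarities in [0, 1]
    codes_q: list of Q spatial relationship codes (iterables of ints)
    codes_r: list of R spatial relationship codes
    """
    rel = np.array([[code_overlap(cq, cr) for cr in codes_r]
                    for cq in codes_q])
    return alpha * np.asarray(vis_sim) + (1.0 - alpha) * rel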

Description

technical field

[0001] The invention relates to content-based image and video retrieval technology, in particular to a spatial relationship matching method and system suitable for video/image local features.

background technique

[0002] The rapid growth of visual information such as Internet images and videos has brought great challenges to the organization and management of information. Similar image and video detection is an important technical means of implementing video image content management and retrieval. Local features represented by the Scale-Invariant Feature Transform (SIFT) provide a robust feature expression for similar video image content detection, and their invariance has made them an extremely important technology in video image content retrieval applications.

[0003] However, to ensure the robustness of local features against various types of transformations, their discriminative ability is severely reduced...
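For context, the minimal OpenCV sketch below extracts the SIFT local feature points and the per-point attributes (position, scale, dominant orientation) that this background refers to; "frame.png" is a placeholder path, and cv2.SIFT_create requires OpenCV 4.4 or later.

```python
import cv2

# Placeholder path; any grayscale frame or image works.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries the attribute information the method builds on:
# position (pt), scale (size) and dominant orientation (angle);
# `descriptors` holds the 128-dimensional SIFT vectors.
for kp in keypoints[:5]:
    print(kp.pt, kp.size, kp.angle)
```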


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30
CPC: G06F16/783
Inventor: 张冬明, 靳国庆, 袁庆升, 张勇东, 包秀国
Owner: 中科星云(鹤壁)人工智能研究院有限公司