
451 results about "Scale-invariant feature transform" patented technology

The scale-invariant feature transform (SIFT) is a feature detection algorithm in computer vision to detect and describe local features in images. It was patented in Canada by the University of British Columbia and published by David Lowe in 1999. Applications include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.
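The detector half of SIFT described above finds keypoints as extrema of a difference-of-Gaussians (DoG) scale space. A minimal NumPy sketch of building one octave of that DoG stack (the sigma0 = 1.6 and k = sqrt(2) constants follow common practice; octave downsampling and the extrema search are omitted):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    # truncated Gaussian kernel, normalized to sum to 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: filter rows, then columns ('same' size via edge padding)
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, 'valid'), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, 'valid'), 0, tmp)

def dog_pyramid(img, sigma0=1.6, k=2**0.5, levels=4):
    # DoG images are differences of successively more blurred copies
    blurred = [blur(img, sigma0 * k**i) for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]
```

Keypoint candidates are then local extrema across space and across adjacent DoG levels.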

Improved multi-instrument reading identification method for a transformer substation inspection robot

Inactive · CN103927507A · Improve robustness · Meet the requirements of automatic detection and identification of readings · Character and pattern recognition · Hough transform · Scale-invariant feature transform
The invention discloses an improved multi-instrument reading identification method for a transformer substation inspection robot. First, for instrument images of different types, equipment templates are processed and the position information of the minimum and maximum scale marks of each instrument is stored in a template database. For instrument images acquired in real time by the robot, the template image of the corresponding piece of equipment is retrieved from a background service, and a sub-image of the instrument dial area is extracted from the input image by matching with the scale-invariant feature transform (SIFT) algorithm. The dial sub-image is then binarized and the pointer skeleton is extracted, pointer lines are detected with a fast Hough transform, noise interference is eliminated, the position and direction angle of the pointer are accurately located, and the pointer reading is computed. The algorithm was field-tested on a domestic 500 kV intelligent substation inspection robot; the overall recognition rate across various instruments exceeds 99%, the precision and robustness of instrument reading are high, and the requirements of on-site substation application are fully satisfied.
Owner:STATE GRID INTELLIGENCE TECH CO LTD
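The pointer-line detection step above relies on a Hough transform over the binarized dial skeleton. A toy sketch of that voting step, recovering the dominant line angle from a set of edge pixels (the quantization choices here are illustrative assumptions, not the patent's):

```python
import numpy as np

def dominant_line_angle(points, n_theta=180):
    """points: (N, 2) array of (x, y) pixels on the binarized pointer.
    Returns the theta (radians) of the line collecting the most Hough votes."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair
    rho = points[:, 0:1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)
    rho_q = np.round(rho).astype(int)       # quantize rho to 1-pixel bins
    best, best_theta = 0, 0.0
    for j, th in enumerate(thetas):
        _, counts = np.unique(rho_q[:, j], return_counts=True)
        if counts.max() > best:             # most votes in a single (rho, theta) bin
            best, best_theta = counts.max(), th
    return best_theta
```

The pointer reading then follows from mapping the recovered angle onto the stored min/max scale positions.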

Remote sensing image registration method for multi-source sensors

The invention provides a remote sensing image registration method for multi-source sensors, relating to image processing technology. The method comprises the following steps: applying the scale-invariant feature transform (SIFT) to a reference image and an image to be registered, extracting feature points, calculating the nearest and second-nearest Euclidean distances between feature points in the two images, and screening optimal matching point pairs according to their ratio; rejecting mismatched points with a random sample consensus (RANSAC) algorithm to obtain an initial set of registration point pairs; calculating distribution quality parameters of the feature point pairs and selecting uniformly distributed, effective control points according to a feature point weight coefficient; searching for optimal registration points among the control points of the image to be registered according to a mutual-information similarity criterion, thus obtaining optimal control point pairs; and recovering the geometric deformation parameters of the image to be registered by polynomial parameter transformation, thus achieving accurate registration between the image to be registered and the reference image. The method offers high calculation speed and high registration precision, and can meet the registration requirements of multi-sensor, multi-temporal, and multi-view remote sensing images.
Owner:济钢防务技术有限公司
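The nearest / second-nearest Euclidean-distance screening described above is Lowe's ratio test. A brute-force sketch (the 0.8 threshold is a typical choice, not taken from the patent):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only when the nearest neighbour in desc_b is clearly
    closer than the second-nearest (Lowe's ratio criterion)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distance to every candidate
        j, j2 = np.argsort(dists)[:2]                # nearest and second-nearest
        if dists[j] < ratio * dists[j2]:
            matches.append((i, j))
    return matches
```

Ambiguous descriptors, whose two best candidates are nearly equidistant, are discarded rather than mismatched.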

Image stitching method based on overlapping-region scale-invariant feature transform (SIFT) feature points

The invention discloses an image stitching method based on scale-invariant feature transform (SIFT) feature points in the overlapping region, belonging to the technical field of image processing. Conventional feature-based image stitching algorithms extract features from the whole image, which makes the computation large and lets features from non-overlapping regions cause matching errors and redundant computation. To address this, the invention extracts feature points only in the overlapping region of the images, reducing the number of feature points and greatly cutting the computation; moreover, the feature points are represented with an improved SIFT feature-vector extraction method, further reducing the computation during feature-point matching and lowering the mismatching rate. The invention also discloses an image stitching method for images with optical imaging differences: two such images to be stitched are converted to a cylindrical coordinate space by projection transformation and are then stitched with the overlapping-region SIFT method above.
Owner:HOHAI UNIV
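Restricting feature extraction to the overlap region, as the method above proposes, presupposes some estimate of where the overlap lies. A deliberately simple sketch that keeps only keypoints inside an assumed overlap strip (the fixed overlap fraction is a placeholder assumption; the patent's actual overlap estimation is not detailed here):

```python
def keypoints_in_overlap(kps, img_width, overlap_frac=0.3, side='right'):
    """kps: iterable of (x, y) keypoint coordinates.
    Keep only keypoints inside the assumed overlap strip on one side."""
    if side == 'right':
        x0 = img_width * (1 - overlap_frac)
        return [p for p in kps if p[0] >= x0]      # right-edge strip
    return [p for p in kps if p[0] <= img_width * overlap_frac]  # left-edge strip
```

Only the surviving keypoints are then described and matched, which is where the computation savings come from.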

Non-rigid heart image grading and registering method based on optical flow field model

The invention discloses a non-rigid heart image hierarchical registration method based on an optical flow field model, belonging to the technical field of image processing. The method comprises the following steps: obtaining an affine transformation coefficient from the scale-invariant feature vectors of two images and producing a coarse registration image through affine transformation; and obtaining the displacement field of the coarse registration image with an optical flow field method and interpolating to obtain a fine registration image. In this method the SIFT (scale-invariant feature transform) feature method and the optical flow field method complement each other: the SIFT features prepare the ground for faster convergence of the optical flow field method, while the optical flow field method makes the registration result more accurate. The feature details of the heart image are better preserved, higher noise resistance and robustness are achieved, and an accurate registration result is obtained. The adopted interpolation method combines linear interpolation with central differencing, and a multi-resolution strategy is used to accomplish the final registration.
Owner:INNER MONGOLIA UNIV OF SCI & TECH
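The coarse stage above estimates an affine transform from matched scale-invariant feature points. Given point correspondences, that transform can be recovered in least squares; a sketch (not the patent's exact formulation):

```python
import numpy as np

def fit_affine(src, dst):
    """Solve dst ≈ A @ src + t in least squares.
    src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 4] = 1   # rows for the x equations
    M[1::2, 2:4] = src; M[1::2, 5] = 1   # rows for the y equations
    b = dst.reshape(-1)                  # interleaved [x0, y0, x1, y1, ...]
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return A, t
```

Warping one image by the recovered (A, t) gives the coarse registration that the optical-flow stage then refines.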

Image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding

Inactive · CN103020647A · Reduce the dimensionality of SIFT features · High simulation · Character and pattern recognition · Singular value decomposition · Data set
The invention discloses an image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding. The implementation steps are: (1) extracting 512-dimensional SIFT features from each image in a data set with an 8-pixel step over 32x32-pixel blocks; (2) applying a spatial max-pooling method to the SIFT features of each image block to obtain a 168-dimensional vector y; (3) randomly selecting several blocks from all 32x32 image blocks in the data set and training a dictionary D with the K-singular value decomposition (K-SVD) method; (4) computing a sparse representation over the dictionary D for the vector y of every block in each image; (5) applying the method of step (2) to all sparse representations of each image to obtain a feature representation of the whole image; and (6) feeding the image feature representations into a linear SVM (support vector machine) classifier to obtain the classification results. The method captures local image structure, removes redundancy in low-level image features, and can be used for target identification.
Owner:XIDIAN UNIV
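Steps (2) and (5) above pool feature responses spatially by taking maxima. A generic sketch of spatial max pooling over a grid of cells (the cell layout and dimensions are illustrative assumptions, not the patented 168-dimensional design):

```python
import numpy as np

def spatial_max_pool(codes, positions, img_shape, grid=(2, 2)):
    """Max-pool feature codes over a spatial grid.
    codes: (N, D) feature/code vectors; positions: (N, 2) integer (x, y)
    locations; returns one pooled D-vector per cell, concatenated."""
    h, w = img_shape
    gy, gx = grid
    dim = codes.shape[1]
    pooled = np.zeros((gy * gx, dim))
    # assign each feature to a grid cell
    cy = np.minimum(positions[:, 1] * gy // h, gy - 1)
    cx = np.minimum(positions[:, 0] * gx // w, gx - 1)
    cell = cy * gx + cx
    for c in range(gy * gx):
        mask = cell == c
        if mask.any():
            pooled[c] = np.abs(codes[mask]).max(axis=0)  # max magnitude per dimension
    return pooled.reshape(-1)
```

Max pooling keeps the strongest response per dimension in each region, which is what makes the pooled vector robust to small translations.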

Layered topological structure based map splicing method for multi-robot system

Inactive · CN103247040A · Improve accuracy · Solve the problem of creating efficiency · Image enhancement · Scale-invariant feature transform · Multirobot systems
The invention belongs to the field of intelligent mobile robots and discloses a layered-topological-structure-based map splicing method for a multi-robot system in an unknown environment, solving the map splicing problem of a multi-robot system when the relative positions and poses of the robots are unknown. The method comprises the following steps: acquiring an accessible space tree, building a layered topological structure, creating a global topological map, extracting SIFT (scale-invariant feature transform) features and performing feature matching, and splicing maps based on ICP (iterative closest point) scan matching. Under the condition that the relative positions and poses of the robots are unknown, a layered topological structure incorporating SIFT features is provided, the global topological map is created incrementally, and map splicing for the multi-robot system in a large-scale unknown environment is achieved from the SIFT information among the nodes combined with a scan matching method; the splicing accuracy and real-time performance are effectively improved. The method is therefore suitable for the field of intelligent mobile robots involved in map creation and map splicing.
Owner:BEIJING UNIV OF TECH
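The ICP scan matching used above alternates nearest-neighbour association with a closed-form rigid alignment. One 2-D ICP iteration, with the rigid step solved by SVD (the Kabsch method), can be sketched as:

```python
import numpy as np

def icp_step(src, dst):
    """One iterative-closest-point step: match each src point to its
    nearest dst point, then solve the best rigid transform via SVD."""
    # brute-force nearest-neighbour association
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = dst[d2.argmin(axis=1)]
    # Kabsch: rotation aligning centred src to centred matches
    mu_s, mu_d = src.mean(0), nn.mean(0)
    H = (src - mu_s).T @ (nn - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

In practice the step is iterated until the alignment residual stops decreasing; the layered topology above supplies the initial guess that keeps ICP from falling into a wrong local minimum.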

Uncalibrated multi-viewpoint image correction method for parallel camera array

Inactive · CN102065313A · Freely adjust horizontal parallax · Increase the use range of multi-look correction · Image analysis · Stereoscopic systems · Parallax · Scale-invariant feature transform
The invention relates to an uncalibrated multi-viewpoint image correction method for a parallel camera array. The method comprises the steps of: first extracting a set of feature points in the viewpoint images and determining matching point pairs between every two adjacent images; then introducing the RANSAC (random sample consensus) algorithm to improve the matching precision of the SIFT (scale-invariant feature transform) feature points, and providing a block-based feature extraction method so that the refined position information of the feature points serves as the input of the subsequent correction steps, from which a correction matrix for the uncalibrated stereoscopic image pairs is calculated; then projecting the several non-coplanar correction planes onto one common correction plane and calculating the horizontal distance between adjacent viewpoints on that plane; and finally adjusting the viewpoint positions horizontally until the parallaxes are uniform, completing the correction. The composite stereoscopic image produced by this multi-viewpoint uncalibrated correction has a strong sense of width and depth and a markedly enhanced stereoscopic effect compared with the uncorrected image, and the method can be applied to front-end signal processing in many 3DTV devices.
Owner:SHANGHAI UNIV

Privacy-protection index generation method for mass image retrieval

The invention discloses a privacy-protecting index generation method for mass image retrieval, addressing the privacy protection problem in large-scale image retrieval by building privacy protection into the retrieval process. The method establishes an image index with privacy protection, so that the safety of a user's private information is protected while retrieval performance is preserved. The method comprises the steps of first extracting and optimizing SIFT (scale-invariant feature transform) features and HSV (hue, saturation, value) color histograms, reducing the feature dimensionality with the manifold dimension reduction method of locality preserving projections, and constructing a vocabulary tree from the dimension-reduced feature data. The vocabulary tree is used to construct an inverted index structure, which reduces the number of features, speeds up plaintext-domain image retrieval, and also improves retrieval performance. Privacy protection is added on top of this plaintext-domain retrieval framework: the inverted index is doubly encrypted with binary random codes and random projections, realizing an image index with privacy protection.
Owner:数安信(北京)科技有限公司
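The vocabulary tree above feeds an inverted index that maps each visual word to the images containing it. A plaintext-domain sketch (the encryption layers are omitted):

```python
from collections import defaultdict

def build_inverted_index(image_words):
    """image_words: {image_id: iterable of visual-word ids}.
    Returns {word_id: sorted list of image ids containing that word}."""
    index = defaultdict(set)
    for img_id, words in image_words.items():
        for w in words:
            index[w].add(img_id)
    return {w: sorted(ids) for w, ids in index.items()}
```

At query time, a query image is quantized to visual words and only the images listed under those words are scored, which is what makes retrieval sublinear in the database size.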

Method for automatically tagging animation scenes for matching through comprehensively utilizing overall color feature and local invariant features

The invention discloses a method for automatically tagging animation scenes by matching with a combination of a global color feature and local invariant features, aiming to improve tagging accuracy and speed for animation scenes by jointly using global color features and color-invariant local features. The technical scheme is as follows: preprocess the target image (the image to be tagged), calculate the global color similarity between the target image and the images in an animation scene image library, and filter the results by color feature; after color filtering, extract the colored scale-invariant feature transform (CSIFT) features of the matched images and of the target image, and calculate the global color similarity and the local color similarities between them; fuse the global and local color similarities into a final total similarity; and apply text processing and combination to the tagging information of the images in the matching result to obtain the final tagging information of the target image. The method improves both the matching accuracy and the matching speed for animation scenes.
Owner:NAT UNIV OF DEFENSE TECH
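The fusion step above combines a global colour similarity with local ones. A common global measure for normalised colour histograms is histogram intersection; a sketch with a simple linear fusion (the weighting scheme is a hypothetical stand-in for the patent's fusion rule):

```python
import numpy as np

def hist_intersection(h1, h2):
    # global colour similarity between two L1-normalised histograms, in [0, 1]
    return np.minimum(h1, h2).sum()

def fused_similarity(global_sim, local_sim, alpha=0.5):
    # hypothetical linear fusion of global and local similarity scores
    return alpha * global_sim + (1 - alpha) * local_sim
```

Filtering by the cheap global score first, and computing CSIFT similarities only for the survivors, is what buys the speed-up the method claims.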

Agricultural pest image recognition method based on multi-feature deep learning technology

The invention relates to an agricultural pest image recognition method based on multi-feature deep learning, overcoming the poor pest image recognition performance of the prior art under complex environmental conditions. The method comprises the following steps: performing multi-feature extraction on large-scale pest image samples, extracting their color features, texture features, shape features, scale-invariant feature transform (SIFT) features, and histogram of oriented gradients (HOG) features; performing multi-feature deep learning, with unsupervised dictionary training carried out separately for each feature type to obtain sparse representations of the different feature types; building a multi-feature representation of the training samples, combining the different feature types into a multi-feature sparse-coding histogram for each pest image sample; and constructing a multiple-kernel learning classifier, trained on the sparse-coding histograms of positive and negative pest image samples, to classify pest images. The method improves the accuracy of pest recognition.
Owner:HEFEI INSTITUTES OF PHYSICAL SCIENCE - CHINESE ACAD OF SCI

Two-dimensional image sequence based three-dimensional reconstruction method of target

The invention discloses a three-dimensional reconstruction method for a target based on a two-dimensional image sequence, solving the problem that traditional image-based three-dimensional reconstruction achieves low precision because many points must be reconstructed and the computation is large. The method comprises the following steps: using a camera to obtain a two-dimensional image sequence of the target, matching the images with the scale-invariant feature transform (SIFT) algorithm, and calculating the geometric relationship between images; performing corner detection on each image in the Gaussian scale pyramid generated while running the SIFT algorithm to obtain multi-scale corner features; taking each obtained SIFT matching point as a center, searching for corresponding corners in each image within a limited, distance-constrained range, and matching the corners across images; and carrying out three-dimensional reconstruction of the matched corners according to the projection matrices of the camera, thereby realizing the three-dimensional reconstruction of the target. The method is applied to the three-dimensional reconstruction of targets.
Owner:HARBIN INST OF TECH
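Given the camera projection matrices, each matched corner pair above is lifted to 3-D. The standard linear (DLT) triangulation of one correspondence from two views can be sketched as:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    # each observation contributes two rows of the homogeneous system A X = 0
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3-D point
    return X[:3] / X[3]             # dehomogenize
```

With noisy matches the SVD gives the least-squares solution rather than an exact null vector, which is why outlier rejection before triangulation matters.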

Multimodal image feature extraction and matching method based on ASIFT (affine scale invariant feature transform)

Inactive · CN102231191A · Has a completely affine invariant property · Solve the unsolvable affine invariance problem · Image analysis · Character and pattern recognition · Feature vector · Scale-invariant feature transform
The invention discloses a multimodal feature extraction and matching method based on ASIFT (affine scale-invariant feature transform), mainly intended to achieve point feature extraction and matching for multimodal images, which the prior art cannot solve. The method can be realized through the following steps: sampling the tilt and longitude parameters of the ASIFT affine transformation model to obtain two groups of simulated views of the two input images; detecting the position and scale of the feature points in the two groups of views with a difference-of-Gaussians (DoG) feature detector; setting the principal directions of the features with an average squared-gradient method and setting the feature vector amplitudes by a counting method; computing the symmetric ASIFT descriptor of the features; and coarsely matching the symmetric ASIFT descriptors with a nearest-neighbor method, then removing mismatched features with an optimized random sampling method. Features can be extracted and matched across images from various sensors; the method is fully affine invariant and can be applied to fields such as object recognition and tracking and image registration.
Owner:XIDIAN UNIV
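The first step above samples tilt and longitude parameters to simulate affine views. A sketch of one common ASIFT sampling scheme, tilts t = sqrt(2)^k with longitude steps proportional to 1/t (the exact constants are assumptions, not necessarily the patent's):

```python
import numpy as np

def asift_view_params(n_tilts=5, b=72.0):
    """Sample the (tilt, longitude-in-degrees) grid used to simulate
    affine views. Larger tilts get denser longitude sampling."""
    views = [(1.0, 0.0)]                 # the original, untilted image
    for k in range(1, n_tilts + 1):
        t = 2 ** (k / 2)                 # tilt t = sqrt(2)^k
        step = b / t                     # longitude spacing shrinks as t grows
        phis = np.arange(0.0, 180.0, step)
        views += [(t, phi) for phi in phis]
    return views
```

Each (t, phi) pair defines one simulated camera view; DoG detection and description then run on every simulated view of both images.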

Image search method based on CGCI-SIFT (consistence index scale-invariant feature transform) local features

The invention discloses an image search method based on CGCI-SIFT (consistence index scale-invariant feature transform) local features. Starting from the strength and spatial distribution of the influence that neighborhood pixels exert on a key point, the method builds a peripheral local descriptor from gray-level texture contrast-strength information, and then builds a central local descriptor from the oriented-gradient information that describes the central feature point well, combining the two into the final description. CGCI-SIFT exploits the contrast properties of a local area together with the gradient information of the original SIFT algorithm, instead of storing only the gradient weights and directions as SIFT does, so it is invariant to a comparatively comprehensive set of geometric and photometric transformations. Because it uses gray-level texture contrast-strength information, CGCI-SIFT is computationally simple, making it efficient and well suited to real-time applications. Tests show that the search method performs stably, searches quickly, and delivers a marked improvement in search results.
Owner:SUZHOU SOUKE INFORMATION TECH

Intelligent identification method and device for baby sleep position

The invention discloses an intelligent identification method and device for a baby's sleeping position. Video analysis and pattern recognition are used to identify the sleeping position so that important events, such as the baby kicking off its quilt, the baby's face being covered by clothing, or the baby sleeping on its stomach, are discovered in time. The method consists of three parts: sample feature modeling, real-time feature analysis, and alarm judgment. In sample feature modeling, the texture features and SIFT (scale-invariant feature transform) features of sample images are analyzed and a sample feature template base is generated by feature fusion; during real-time feature analysis, the features of a designated monitoring area are analyzed, the sleeping position is identified against the sample feature template base, the alarm type is judged, and alarm information is output. With this method and device, a guardian does not have to monitor the baby continuously by video or on site; especially when the guardian is sound asleep at night, important baby events can still be effectively detected, identified, and warned about early and in time.
Owner:深圳市瑞工科技有限公司

Three-dimensional reconstruction method of panoramic image in mixed vision system

The invention relates to a three-dimensional reconstruction method for panoramic images in a mixed vision system comprising an RGB-D (red, green, blue plus depth) camera and a panoramic camera. The method comprises the following steps: first, calibrating the mixed vision system to obtain its internal and external parameters; second, determining the common field of view of the system from its spatial configuration and the internal and external parameters; then obtaining matched feature points within the common field of view using SIFT (scale-invariant feature transform) plus RANSAC (random sample consensus); and finally, using the depth information of the RGB-D camera, performing three-dimensional reconstruction of the matched feature points in the panoramic camera. The method obtains the three-dimensional information of feature points in the common field of view of the panoramic image and achieves low-complexity, high-quality three-dimensional reconstruction of the panoramic image.
Owner:福建旗山湖医疗科技有限公司
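The final reconstruction step above uses the RGB-D depth to lift matched pixels into 3-D. With pinhole intrinsics K, back-projecting one pixel can be sketched as:

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift a pixel (u, v) with measured depth z into a 3-D point in the
    camera frame, using pinhole intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]]."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

The resulting camera-frame points are then mapped into the panoramic camera's frame with the calibrated external parameters.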

Video copy detection method based on multi-feature Hash

The invention discloses a video copy detection method based on multi-feature hashing, mainly solving the problem that existing video copy detection algorithms cannot effectively balance detection efficiency against detection accuracy. The realization steps are: (1) extracting the pyramid histogram of oriented gradients (PHOG) of each key frame as its global feature; (2) extracting a weighted contrast histogram based on the scale-invariant feature transform (SIFT) of each key frame as its local feature; (3) building an objective function with the similarity-preserving multi-feature hashing learning algorithm SPM2H and obtaining L hash functions by optimization; (4) mapping the key frames of the database videos and of the query video into L-dimensional hash codes with the L hash functions; (5) judging by feature matching whether the query video is a copy. The method is robust to many kinds of attacks and can be used for copyright protection, copy control, and data mining of digital videos on the Internet.
Owner:XIDIAN UNIV
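Step (5) above compares L-dimensional hash codes; with binary codes this reduces to a Hamming-distance lookup. A sketch (the search radius is an arbitrary assumption):

```python
import numpy as np

def hamming_matches(query_code, db_codes, max_dist=2):
    """query_code: (L,) binary array; db_codes: (N, L) binary matrix.
    Returns indices of database entries within the given Hamming radius."""
    dists = (db_codes != query_code).sum(axis=1)   # Hamming distance per entry
    return np.where(dists <= max_dist)[0]
```

Because Hamming distances are just bit comparisons, this matching step is what lets hashing-based copy detection stay fast at web scale.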