495 results about "Invariant feature" patented technology

An invariant feature is a characteristic of an object that does not change when the object is viewed under different circumstances; such features are unaffected by manipulations of the observer or of the object, and they underpin object recognition by humans and machines.

Method and apparatus for determining absolute position of a tip of an elongate object on a plane surface with invariant features

A method and apparatus for determining a pose of an elongate object and an absolute position of its tip while the tip is in contact with a plane surface having invariant features. The surface and features are illuminated with a probe radiation and a scattered portion, e.g., the back-scattered portion, of the probe radiation returning from the plane surface and the feature to the elongate object at an angle τ with respect to an axis of the object is detected. The pose is derived from a response of the scattered portion to the surface and the features and the absolute position of the tip on the surface is obtained from the pose and knowledge about the feature. The probe radiation can be directed from the object to the surface at an angle σ to the axis of the object in the form of a scan beam. The scan beam can be made to follow a scan pattern with the aid of a scanning arrangement with one or more arms and one or more uniaxial or biaxial scanners. Angle τ can also be varied, e.g., with the aid of a separate or the same scanning arrangement as used to direct probe radiation to the surface. The object can be a pointer, a robotic arm, a cane or a jotting implement such as a pen, and the features can be edges, micro-structure or macro-structure belonging to, deposited on or attached to the surface which the tip of the object is contacting.
Owner:ELECTRONICS SCRIPTING PRODS
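
The final step of the abstract above, obtaining the tip's absolute position from the derived pose, reduces to a rigid-body transform of the known tip offset. A minimal NumPy sketch, assuming the pose is given as a rotation matrix R and translation t and that the tip's location in the object's own frame is known (all names here are illustrative, not from the patent):

```python
import numpy as np

def tip_position(R, t, tip_offset):
    """Map the tip's position in object coordinates to world coordinates.

    R: 3x3 rotation of the object frame, t: 3-vector translation,
    tip_offset: tip location in the object's own frame (hypothetical names).
    """
    return R @ np.asarray(tip_offset) + np.asarray(t)

# Identity orientation: the tip lands at its offset, shifted by t.
p = tip_position(np.eye(3), np.array([1.0, 2.0, 0.0]), [0.0, 0.0, -0.15])
```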

Improved multi-instrument reading identification method for a substation inspection robot

Inactive · CN103927507A · Improves robustness · Meets the requirements of automatic detection and identification of readings · Character and pattern recognition · Hough transform · Scale-invariant feature transform
The invention discloses an improved multi-instrument reading identification method for a substation inspection robot. In the method, first, for instrument equipment images of different types, equipment template processing is carried out, and the position information of the min and max scale marks of each instrument is stored in a template database. For instrument images acquired in real time by the robot, the template graph of the corresponding piece of equipment is retrieved from a background service, and an instrument dial plate sub-image is extracted from the input image by matching with the scale-invariant feature transform (SIFT) algorithm. Binarization and pointer skeleton processing are then performed on the dial sub-image; pointer lines are detected with a fast Hough transform, noise interference is eliminated, the position and directional angle of the pointer are accurately located, and the pointer reading is computed. The algorithm has undergone an on-site test on a domestic 500 kV intelligent substation inspection robot; the integrated recognition rate over various instruments exceeds 99%, the precision and robustness of instrument reading are high, and the requirements of on-site substation application are fully satisfied.
Owner:STATE GRID INTELLIGENCE TECH CO LTD
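
Once the pointer angle and the angles of the min and max scale marks are known, the reading itself is a linear interpolation. A minimal sketch of that last step (the angle and value arguments are illustrative; the patent only states that min/max scale positions come from the template database):

```python
def pointer_reading(angle, angle_min, angle_max, value_min, value_max):
    """Linearly interpolate a dial reading from the detected pointer angle.

    angle_min/angle_max: angles of the min/max scale marks from the template;
    value_min/value_max: the readings those marks represent.
    """
    frac = (angle - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)

# A pointer halfway between the 0-unit and 100-unit marks reads 50.
r = pointer_reading(135.0, 45.0, 225.0, 0.0, 100.0)
```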

Remote sensing image registration method of multi-source sensor

The invention provides a remote sensing image registration method for a multi-source sensor, relating to image processing technology. The method comprises the following steps: respectively applying the scale-invariant feature transform (SIFT) to a reference image and an image to be registered to extract feature points; calculating the nearest and second-nearest Euclidean distances between feature points in the image to be registered and the reference image, and screening optimal matching point pairs according to their ratio; rejecting erroneous registration points with the random sample consensus (RANSAC) algorithm to screen the original registration point pairs; calculating distribution quality parameters of the feature point pairs and selecting uniformly distributed effective control points according to a feature point weight coefficient; searching for the optimal registration point among the control points of the image to be registered according to a mutual information similarity criterion, thus obtaining the optimal registration point pairs of the control points; and acquiring the geometric deformation parameters of the image to be registered by polynomial parameter transformation, thus realizing accurate registration of the image to be registered against the reference image. The method has the advantages of high calculation speed and high registration precision, and can meet the registration requirements of multi-sensor, multi-temporal and multi-view remote sensing images.
Owner:JIGANG DEFENSE TECH CO LTD
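
The screening of matching pairs by the ratio of nearest to second-nearest Euclidean distance is the standard SIFT ratio test. A minimal brute-force sketch, assuming descriptors are given as rows of NumPy arrays (the 0.75 threshold is an illustrative choice, not the patent's):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Keep a match only when the nearest neighbour in desc_b is clearly
    closer than the second nearest. Returns (index in desc_a, index in desc_b)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Two toy 2-D descriptors per image, each with one unambiguous neighbour.
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.0, 0.1], [3.0, 3.0], [5.0, 5.1]])
m = ratio_test_matches(desc_a, desc_b)
```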

Automatic registration method for three-dimensional point cloud data

The invention discloses an automatic registration method for three-dimensional point cloud data. The method comprises the steps that two point clouds to be registered are sampled to obtain feature points, rotation invariant feature factors of the feature points are calculated, and the rotation invariant feature factors of the feature points in the two point clouds are subjected to matching search to obtain an initial corresponding relation between the feature points; then, a random sample consensus algorithm is adopted to judge and remove mismatching points existing in an initial matching point set to obtain an optimized feature point corresponding relation, and a rough rigid transformation relation between the two point clouds is obtained through calculation to realize rough registration; a rigid transformation consistency detection algorithm is provided, a local coordinate system transformation relation between the matching feature points is utilized to perform binding detection on the rough registration result, and verification of the correctness of the rough registration result is completed; and an ICP algorithm is adopted to optimize the rigid transformation relation between the point cloud data to realize automatic precise registration of the point clouds finally.
Owner:HUAZHONG UNIV OF SCI & TECH
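
The rigid transformation between matched point clouds, refined in the final ICP stage above, is classically solved in closed form with the SVD (Kabsch) method inside each ICP iteration. A minimal sketch under that assumption (the patent does not specify this particular solver):

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t with R @ p + t ~= q for
    corresponding rows of P and Q (SVD/Kabsch solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```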

Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features

An apparatus and method for optically inferring an absolute pose of a manipulated object in a real three-dimensional environment from on-board the object with the aid of an on-board optical measuring arrangement. At least one invariant feature located in the environment is used by the arrangement for inferring the absolute pose. The inferred absolute pose is expressed with absolute pose data (φ,θ,ψ,x,y,z) that represents Euler rotated object coordinates expressed in world coordinates (Xo,Yo,Zo) with respect to a reference location, such as, for example, the world origin. Other conventions for expressing absolute pose data in three-dimensional space and representing all six degrees of freedom (three translational degrees of freedom and three rotational degrees of freedom) are also supported. Irrespective of format, a processor prepares the absolute pose data and identifies a subset that may contain all or fewer than all absolute pose parameters. This subset is transmitted to an application via a communication link, where it is treated as input that allows a user of the manipulated object to interact with the application and its output.
Owner:ELECTRONICS SCRIPTING PRODS
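
The absolute pose data (φ,θ,ψ,x,y,z) above packs the three rotational degrees of freedom into Euler angles. A minimal sketch of turning such angles into a rotation matrix; this uses the common Z-Y-X (yaw-pitch-roll) convention, which may differ from the convention the patent intends:

```python
import numpy as np

def euler_zyx_to_matrix(phi, theta, psi):
    """Rotation matrix from Euler angles in the Z-Y-X convention
    (phi about z, then theta about y, then psi about x)."""
    cz, sz = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(psi), np.sin(psi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

R0 = euler_zyx_to_matrix(0.0, 0.0, 0.0)   # zero angles give the identity
R1 = euler_zyx_to_matrix(0.4, -0.2, 0.9)  # any angles give a proper rotation
```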

Mechanical failure migration diagnosis method and system based on adversarial learning

The invention discloses a mechanical failure migration diagnosis method and system based on adversarial learning. The method comprises the following steps: acquiring and analyzing original signals of mechanical failures under different working conditions to generate a labeled source domain training dataset, an unlabeled source domain training dataset and a target domain test dataset; training a deep convolutional neural network model on the labeled source domain training dataset with the back propagation algorithm to generate a failure diagnosis model; training the failure diagnosis model on the unlabeled source domain training dataset and the target domain test dataset; fine-tuning the trained failure diagnosis model on the labeled source domain training dataset with the back propagation algorithm; and inputting the unlabeled target domain test dataset into the fine-tuned failure diagnosis model and outputting the failure category of the sample to be tested. With this method, domain-invariant features are obtained through adversarial learning, migration between different domains is realized, and intelligent diagnosis of mechanical failures under variable working conditions is achieved.
Owner:TSINGHUA UNIV
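
The abstract does not name a specific adversarial mechanism; one common way to obtain domain-invariant features (as in DANN-style training) is a gradient-reversal layer between the feature extractor and a domain classifier. A minimal sketch of that layer's two passes, offered as an illustration rather than the patent's method:

```python
import numpy as np

def grad_reverse_forward(x):
    """Gradient-reversal layer, forward pass: the identity."""
    return x

def grad_reverse_backward(grad, lam=1.0):
    """Backward pass: flip (and scale) the domain-classifier gradient so the
    feature extractor is pushed toward features the classifier cannot use,
    i.e. domain-invariant features."""
    return -lam * np.asarray(grad)

x = np.array([1.0, -2.0])
g = np.array([0.5, 0.5])
```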

Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation

The invention discloses a target automatic recognition and tracking method based on affine invariant points and optical flow calculation, which comprises the following steps: firstly, carrying out image preprocessing on a target image and video frames and extracting affine-invariant feature points; then carrying out feature point matching and eliminating mismatched points; declaring target recognition successful when the matched feature point pairs reach a certain number and an affine transformation matrix can be generated; then using the affine invariant points collected in the previous step for feature optical flow calculation to realize real-time target tracking; and immediately returning to the first step to carry out target recognition again if tracking of the target fails. The feature point operator used by the invention is an image local feature descriptor that is based on scale space and remains invariant to image scaling and rotation, and even to affine transformation. In addition, the adopted optical flow calculation method has a small calculation load and high accuracy, and can realize real-time tracking. The invention is widely applicable to fields such as video monitoring, image retrieval, computer-aided driving systems and robotics.
Owner:NANJING UNIV OF SCI & TECH
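
The abstract does not say which sparse optical flow method is used; the classic low-cost choice for tracking feature points is the Lucas-Kanade least-squares solve over a small window. A minimal single-window sketch, assuming image gradients Ix, Iy and the temporal difference It are already computed for the patch:

```python
import numpy as np

def lucas_kanade_step(Ix, Iy, It):
    """Least-squares flow (u, v) for one patch from the brightness-constancy
    constraint Ix*u + Iy*v + It = 0 stacked over all pixels in the window."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v)

# Synthetic patch with known flow (1.0, 0.5): build It from the constraint.
Ix = np.array([[1.0, 0.0], [0.0, 2.0]])
Iy = np.array([[0.0, 1.0], [2.0, 0.0]])
It = -(Ix * 1.0 + Iy * 0.5)
uv = lucas_kanade_step(Ix, Iy, It)
```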

Method and system for media audience measurement and spatial extrapolation based on site, display, crowd, and viewership characterization

The present invention provides a comprehensive method to design an automatic media viewership measurement system, from the problem of sensor placement for an effective sampling of the viewership to the method of extrapolating spatially sampled viewership data. The system elements that affect the viewership—site, display, crowd, and audience—are identified first. The site-viewership analysis derives some of the crucial elements in determining an effective data sampling plan: visibility, occupancy, and viewership relevancy. The viewership sampling map is computed based on the visibility map, the occupancy map, and the viewership relevancy map; the viewership measurement sensors are placed so that the sensor coverage maximizes the viewership sampling map. The crowd-viewership analysis derives a model of the viewership in relation to the system parameters so that the viewership extrapolation can effectively adapt to the time-changing spatial distribution of the viewership; the step identifies crowd dynamics, and its invariant features as the crucial elements that extract the influence of the site, display, and the crowd to the temporal changes of viewership. The extrapolation map is formulated around these quantities, so that the site-wide viewership can be effectively estimated from the sampled viewership measurement.
Owner:VIDEOMINING CORP
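
The sampling-map computation above combines the visibility, occupancy, and relevancy maps; the patent states only that sensor coverage should maximize the resulting map. A minimal sketch using an elementwise product and a greedy placement rule (the product and the greedy choice are both assumptions made for illustration):

```python
import numpy as np

def sampling_map(visibility, occupancy, relevancy):
    """Combine the three maps cell by cell into a viewership sampling map."""
    return visibility * occupancy * relevancy

def place_sensors(smap, coverage, k):
    """Greedily pick k candidate sensors: each round, take the coverage mask
    that captures the most remaining sampling weight, then zero it out."""
    remaining = smap.astype(float).copy()
    chosen = []
    for _ in range(k):
        gains = [float((remaining * c).sum()) for c in coverage]
        best = int(np.argmax(gains))
        chosen.append(best)
        remaining = remaining * (1 - coverage[best])
    return chosen

# Two candidate sensors: one covers the top row, one the bottom row.
smap = sampling_map(np.ones((2, 2)), np.ones((2, 2)), np.array([[1.0, 0.0], [0.0, 2.0]]))
cov = [np.array([[1.0, 1.0], [0.0, 0.0]]), np.array([[0.0, 0.0], [1.0, 1.0]])]
picked = place_sensors(smap, cov, 2)
```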

Non-rigid heart image hierarchical registration method based on an optical flow field model

The invention discloses a non-rigid heart image hierarchical registration method based on an optical flow field model, belonging to the technical field of image processing. The method comprises the following steps: obtaining an affine transformation coefficient from the scale-invariant feature vectors of two images and obtaining a coarse registration image through affine transformation; then obtaining the displacement transformation of the coarse registration image with an optical flow field method and interpolating to obtain a fine registration image. In this method, the SIFT (Scale Invariant Feature Transform) feature method and the optical flow field method complement each other: the SIFT features prepare the ground for increasing the convergence speed of the optical flow field method, and the optical flow field method makes the registration result more accurate. The characteristic details of the heart image are better preserved, higher anti-noise capability and robustness are achieved, and an accurate registration result is obtained. The adopted interpolation method combines linear interpolation with central differencing, and the method simultaneously realizes final registration with a multi-resolution strategy.
Owner:INNER MONGOLIA UNIV OF SCI & TECH
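
The first stage above, estimating an affine transformation from matched SIFT feature points, is an ordinary least-squares problem. A minimal sketch, assuming matched points are given as rows of src and dst arrays:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix M so that
    dst ~= src @ M[:, :2].T + M[:, 2] for matched 2-D points."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])          # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                     # shape (2, 3)

# Recover a known transform: uniform scale 2 plus translation (1, -1).
src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])
dst = src * 2.0 + np.array([1.0, -1.0])
M = fit_affine(src, dst)
```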

Unmanned aerial vehicle three-dimensional map construction method and device, computer equipment and storage medium

The invention relates to an unmanned aerial vehicle three-dimensional map construction method. The method comprises the following steps: obtaining video frame images shot by a camera and extracting feature points from each video frame image; matching the feature points with a hybrid matching algorithm combining color histograms and the scale-invariant feature transform to obtain feature point matching pairs; calculating a pose transformation matrix from the feature point matching pairs; determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix and converting the three-dimensional coordinates of the feature points in the video frame images into a world coordinate system to obtain a three-dimensional point cloud map; taking the video frame images as the input of a target detection model to obtain target object information; and combining the three-dimensional point cloud map with the target object information to obtain a three-dimensional point cloud map containing the target object information. The method improves the real-time performance and accuracy of three-dimensional point cloud map construction, and the map contains rich information. In addition, the invention further provides an unmanned aerial vehicle three-dimensional map construction device, computer equipment and a storage medium.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
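
The hybrid matching step above blends a color-histogram score with a descriptor score; the abstract does not give the formula. A minimal sketch using histogram intersection and an illustrative blend (the alpha weight and the 1/(1+d) distance-to-similarity mapping are assumptions, not the patent's):

```python
import numpy as np

def hist_intersection(h1, h2):
    """Color-histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

def hybrid_score(h1, h2, desc_dist, alpha=0.5):
    """Blend color similarity with a descriptor-distance similarity."""
    return alpha * hist_intersection(h1, h2) + (1 - alpha) / (1.0 + desc_dist)

h = np.array([0.5, 0.3, 0.2])   # a normalized 3-bin color histogram
s_same = hybrid_score(h, h, desc_dist=0.0)   # identical color and descriptor
s_far = hybrid_score(h, h, desc_dist=1.0)    # identical color, distant descriptor
```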

Method for automatically tagging animation scenes for matching through comprehensively utilizing overall color feature and local invariant features

The invention discloses a method for automatically tagging animation scenes by matching, comprehensively utilizing an overall color feature and local invariant features, which aims to improve the tagging accuracy and speed for animation scenes. The technical scheme is as follows: preprocessing the target image (namely, the image to be tagged), calculating the overall color similarity between the target image and the images in an animation scene image library, and filtering the results by color feature; after color feature filtering, extracting the colored scale-invariant feature transform (CSIFT) features of the remaining candidate images and of the target image, and calculating the overall color similarity and the local color similarities between them; fusing the overall color similarity and the local color similarities to obtain a final total similarity; and carrying out text processing and combination on the tagging information of the images in the matching result to obtain the final tagging information of the target image. The method improves the matching accuracy and matching speed for animation scenes.
Owner:NAT UNIV OF DEFENSE TECH