200 results for "Rigid transformation" patented technology

In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points.
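
In symbols, such a map can be written as
\[
T(x) = R\,x + t, \qquad R^{\top}R = I \quad (\det R = +1 \text{ for a proper rigid motion}),
\]
so distances are preserved automatically:
\[
\lVert T(x) - T(y)\rVert = \lVert R(x - y)\rVert = \lVert x - y \rVert.
\]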

Automatic registration method for three-dimensional point cloud data

The invention discloses an automatic registration method for three-dimensional point cloud data. The method comprises the steps that: two point clouds to be registered are sampled to obtain feature points, rotation-invariant feature factors of the feature points are calculated, and the rotation-invariant feature factors of the feature points in the two point clouds are matched by search to obtain an initial correspondence between the feature points; a random sample consensus algorithm is then used to identify and remove mismatched points in the initial matching point set to obtain an optimized feature-point correspondence, and a rough rigid transformation relation between the two point clouds is calculated to realize rough registration; a rigid-transformation consistency detection algorithm is provided, in which the local-coordinate-system transformation relations between matched feature points are used to perform constraint checking on the rough registration result, verifying its correctness; and an ICP algorithm is finally used to optimize the rigid transformation relation between the point cloud data, realizing automatic precise registration of the point clouds.
Owner:HUAZHONG UNIV OF SCI & TECH
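
A note for readers: a rough rigid transformation from filtered correspondences is conventionally computed with the closed-form SVD (Kabsch) solution. A minimal NumPy sketch of that standard step follows; it is a generic illustration, not the patented implementation.

    import numpy as np

    def rigid_from_correspondences(P, Q):
        """Least-squares R, t with R @ P[i] + t ~= Q[i]; P, Q are (N, 3) matched points."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
        H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                             # proper rotation, det = +1
        t = cQ - R @ cP
        return R, t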

Fast 3D-2D image registration method with application to continuously guided endoscopy

A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real-time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. The viewpoint is updated via a Gauss-Newton parameter update and certain of the steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source. The invention has far-reaching applications, particularly in the field of assisted endoscopy, including bronchoscopy and colonoscopy. Other applications include aerial and ground-based navigation.
Owner:PENN STATE RES FOUND
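
The Gauss-Newton viewpoint update referred to above has a standard form: stack the pixel differences between the warped video frame and the reference image into a residual vector r(p) and update the viewpoint parameters p using the Jacobian J of r. A minimal sketch, with `residual` and `jacobian` as placeholder callables for quantities the patent computes from the warped images:

    import numpy as np

    def gauss_newton_step(p, residual, jacobian):
        """One Gauss-Newton update p <- p - (J^T J)^{-1} J^T r for minimizing ||r(p)||^2."""
        r = residual(p)    # stacked image-difference residuals, shape (M,)
        J = jacobian(p)    # derivative dr/dp, shape (M, len(p))
        return p - np.linalg.solve(J.T @ J, J.T @ r)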

Early tumor localization and tracking method based on multi-modal sensitized imaging fusion

An early-stage tumor localization and tracking method based on multi-modal sensitized imaging fusion, belonging to the field of medical image processing. The invention includes: acquiring a preoperative medical image in which the tumor target focus is sensitized; and acquiring an intraoperative ultrasound image in which the tumor target focus is sensitized. During image-guided therapy, a combination of a global rigid transformation and a local non-rigid transformation around the tumor target focus is used as the geometric transformation model for deformable registration; the preoperative and intraoperative sensitized images are deformably registered based on jointly marked regions, the preoperative and intraoperative images are fused, and a three-dimensional visualization model of the tumor focus region is reconstructed. The same deformable registration is then used to compensate the preoperative image for motion-induced deformation, after which tracking of the tumor target focus is completed automatically. The invention can be applied in many settings, such as early tumor diagnosis, image-guided early tumor intervention, image-guided minimally invasive surgery, and image-guided physiotherapy.
Owner:SHANGHAI JIAO TONG UNIV

Camera tracking method and device

Provided are a camera tracking method and device, which use binocular video images to perform camera tracking, thereby improving tracking accuracy. The camera tracking method provided in the embodiments of the present invention comprises: acquiring an image set of a current frame; respectively extracting feature points of each image in the image set of the current frame; according to the principle that scene depths in adjacent regions of an image are similar, acquiring a matched feature point set of the image set of the current frame; according to an attribute parameter and a pre-set model of a binocular camera, respectively estimating the three-dimensional positions of the scene points corresponding to each pair of matched feature points in the local coordinate system of the current frame and the local coordinate system of the next frame; and according to these three-dimensional positions, estimating a motion parameter of the binocular camera in the next frame using the invariance of barycentric coordinates with respect to rigid transformation, and optimizing the motion parameter of the binocular camera in the next frame.
Owner:HUAWEI TECH CO LTD
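
The invariance the abstract relies on is elementary to check: the barycenter commutes with any rigid transformation, so once centroids are matched the translation drops out and only the rotation remains to be estimated. A short numerical demonstration (generic, not the patented estimator):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                     # scene points, current frame
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([1.0, -2.0, 0.5])
    Y = X @ R.T + t                                   # same points, next frame
    # Barycenter of the transformed set equals the transformed barycenter.
    assert np.allclose(Y.mean(axis=0), R @ X.mean(axis=0) + t)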

A battery appearance defect detection method based on dimension reduction and point cloud data matching

The invention discloses a battery appearance defect detection method based on dimension reduction and point cloud data matching, belonging to the technical field of machine vision detection. The method obtains three-dimensional point cloud data of a battery to be detected and reduces the dimension of the point cloud data; obtains the defect area of the battery to be detected and extracts the point cloud data of the defect area; extracts the point cloud data of the same area of a standard battery; samples the two pieces of point cloud data and performs matching search; obtains an optimized feature-point correspondence and calculates a rough rigid transformation relation to realize rough registration; performs constraint detection on the rough registration result to verify its correctness; and optimizes the rigid transformation relation between the point cloud data to achieve automatic and accurate registration, so as to determine whether the battery appearance is qualified. A 3D image is changed into a 2D image by a dimension reduction algorithm, and the point cloud data of the defect area is acquired by applying a planar defect detection technique to the 2D image, so that the point cloud data to be matched is reduced, the detection range is narrowed, the running time is reduced, and the accuracy is improved.
Owner:JIANGSU UNIV OF TECH
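
One plausible reading of the dimension-reduction step is an orthographic projection of the cloud onto a 2D depth image, after which ordinary planar defect detection applies. A sketch under that assumption (the cell size `res` and the keep-max-z rule are illustrative choices, not taken from the patent):

    import numpy as np

    def cloud_to_depth_image(points, res=0.5):
        """Bin x, y into cells and keep the largest z per cell; empty cells stay NaN."""
        ij = np.floor(points[:, :2] / res).astype(int)
        ij -= ij.min(axis=0)                          # shift indices to start at 0
        h, w = ij.max(axis=0) + 1
        img = np.full((h, w), np.nan)
        for (i, j), z in zip(ij, points[:, 2]):
            if np.isnan(img[i, j]) or z > img[i, j]:
                img[i, j] = z
        return img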

Point cloud registration method based on feature extraction

The invention belongs to the field of three-dimensional measurement, and particularly discloses a point cloud registration method based on feature extraction. The point cloud registration method comprises the steps: firstly calculating a feature index of each point through the maximum principal curvature and the minimum principal curvature of each point in a reference point cloud and a target point cloud; determining a neighborhood point of each point according to a preset number of neighborhood points, and obtaining feature points in the reference point cloud and the target point cloud according to the relationship between the feature indexes of the points and the feature indexes of the neighborhood points; constructing a local reference coordinate system for each feature point; and further obtaining three-dimensional local features of each feature point, matching the feature points in the reference point cloud and the target point cloud according to the three-dimensional local features to obtain multiple pairs of corresponding feature points, obtaining a three-dimensional rigid transformation matrix from the reference point cloud to the target point cloud according to the relationship between the corresponding feature points, and completing point cloud registration. According to the point cloud registration method, the influence of noise in the point cloud, isolated points, local point cloud density non-uniformity and the like on point cloud registration can be reduced, so that the point cloud registration result is accurate.
Owner:HUAZHONG UNIV OF SCI & TECH
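
The abstract leaves the selection rule relating a point's feature index to its neighbors' unspecified; a common choice, sketched here only as an assumption, is to keep points whose index is a local maximum over their k-nearest neighborhood:

    import numpy as np
    from scipy.spatial import cKDTree

    def select_feature_points(points, feature_index, k=20):
        """Return indices of points whose (precomputed, curvature-based)
        feature index is maximal over their k-nearest neighborhood."""
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k + 1)         # neighborhood includes the point itself
        keep = feature_index[:, None] >= feature_index[nbrs]
        return np.where(keep.all(axis=1))[0]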

Three-dimensional point cloud full-automatic registration method

The invention discloses a three-dimensional point cloud full-automatic registration method. The method includes the following steps: two groups of point cloud data A and B are inputted, the normal directions and boundaries of the two groups of point cloud data are calculated, the data are simplified, and boundary points are removed; three-dimensional feature processing is performed on the pre-processed point cloud data A and B to obtain corresponding three-dimensional feature descriptors Key A and Key B; for each datum in Key A, the several nearest points in Key B are searched and adopted as preliminary corresponding points, and corresponding points which do not satisfy a predetermined condition are removed from the preliminary corresponding points to obtain a final candidate point set; a rigid transformation matrix is calculated for each group of candidate point pairs to form a candidate matrix set; and a confidence factor is calculated for each candidate matrix, the candidate matrix with the maximum confidence factor is selected as the final rigid transformation matrix, and the source point cloud is transformed into the coordinate system of the target point cloud through this rigid transformation matrix.
Owner:WISESOFT CO LTD
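
The patent does not spell out its confidence factor; a common surrogate, used in this sketch purely as an assumption, is the fraction of transformed source points that land within a tolerance of some target point:

    import numpy as np
    from scipy.spatial import cKDTree

    def best_candidate(source, target, candidates, tol=0.01):
        """Score each candidate (R, t) by its inlier fraction and return the best."""
        tree = cKDTree(target)
        best, best_conf = None, -1.0
        for R, t in candidates:
            moved = source @ R.T + t
            d, _ = tree.query(moved)                  # distance to nearest target point
            conf = float(np.mean(d < tol))
            if conf > best_conf:
                best, best_conf = (R, t), conf
        return best, best_conf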

Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance

The invention discloses a single-lens three-dimensional image reconstruction method assisted by laser radar point cloud data. The method comprises the steps of: acquiring multiple visible light images and laser radar point cloud data of a scanned object; based on the laser radar point cloud data, adopting an incremental structure-from-motion algorithm to carry out three-dimensional reconstruction of the visible light images and acquire an image point cloud; and registering the image point cloud and the laser radar point cloud data by combining rigid and non-rigid transformation to obtain a three-dimensional point cloud model of the scanned object. Geometric correction is carried out by adding virtual ground control points, found by searching for corresponding points between the images and the laser radar point cloud, during the three-dimensional reconstruction of the images, so that distortion during three-dimensional reconstruction can be reduced; and because the image point cloud and laser radar point cloud data are registered by combining rigid and non-rigid transformation, the registration precision can be improved and high-precision three-dimensional point cloud model reconstruction is achieved.
Owner:HUNAN SHENGDING TECH DEV CO LTD
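
As a rough illustration of combining the two stages (the patent does not disclose its non-rigid model, so the locally smoothed nearest-neighbor displacement below is purely an assumed stand-in):

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_then_nonrigid(source, target, R, t, smooth_k=10):
        """Apply a previously estimated rigid transform, then refine each point
        with a locally averaged displacement toward its nearest target neighbor."""
        moved = source @ R.T + t                      # rigid stage
        tree = cKDTree(target)
        _, nn = tree.query(moved)
        disp = target[nn] - moved                     # raw per-point residuals
        _, nbrs = cKDTree(moved).query(moved, k=smooth_k)
        disp = disp[nbrs].mean(axis=1)                # smooth the displacement field
        return moved + disp                           # non-rigid stage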

Calibration-target-free universal hand-eye calibration method based on 3D vision

The invention discloses a calibration-target-free universal hand-eye calibration method based on 3D vision, applicable to both the eye-to-hand and eye-in-hand configurations. The method comprises the steps that: firstly, the center position of the flange plate of the end effector of a mechanical arm is kept constant, the end effector is controlled to rotate only, and a 3D vision sensor is used to collect the coordinates of at least four feature points F for sphere-center fitting; then, the posture of the end effector is kept constant, the end effector is controlled to translate only, the coordinates of at least three feature points F are collected by the 3D camera, and the robot controller records or calculates the corresponding flange center positions so as to estimate the rigid transformation parameters. The method has the beneficial effects that the spatial information of the 3D vision sensor is fully utilized, the large error introduced when measuring the pose of a calibration target is avoided, and no complicated high-dimensional nonlinear matrix equation needs to be solved, so the calibration precision and calibration efficiency are high.
Owner:SHANGHAI RO INTELLIGENT SYST
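
The sphere-center fitting from at least four feature-point coordinates is a standard linear least-squares problem; a sketch of that conventional step (the patent's exact formulation may differ):

    import numpy as np

    def fit_sphere_center(pts):
        """|p - c|^2 = r^2 rearranges to 2 p.c + (r^2 - |c|^2) = |p|^2,
        which is linear in c and k = r^2 - |c|^2; needs >= 4 non-coplanar points."""
        A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
        b = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        c, k = sol[:3], sol[3]
        return c, np.sqrt(k + c @ c)                  # center, radius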

Multi-view ISAR image fusion method

The invention discloses a multi-view ISAR image fusion method, which mainly solves the problems of redundant feature points, complex processing, and heavy computation in the prior art. The scheme is as follows: a series of N ISAR images is segmented by superpixel simple linear iterative clustering to obtain superpixel coordinates X, Y and brightness information L; a brightness threshold is set, and superpixels whose L exceeds the threshold are retained; the first ISAR image is selected as the reference image, the rigid transformation relationship between the n-th ISAR image and the reference image is established using the retained parameters, and the transformation matrix Bn is obtained; a cost function Jn between the n-th ISAR image and the reference image is defined; the rigid transformation matrix Bn' that minimizes Jn is solved, and its inverse matrix An is obtained; the n-th ISAR image is transformed into the reference coordinate system according to the inverse matrix An, and all the transformed ISAR images are superposed with the reference image to obtain the fused image. The feature points extracted by the invention are compact, the computation is light, and the method can be used for three-dimensional image reconstruction, target recognition, and attitude estimation.
Owner:XIDIAN UNIV
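
Solving for the rigid transformation Bn' that minimizes the cost Jn can be sketched generically as optimizing a 2D rotation-plus-translation over an image-difference cost. In the sketch below, `sample_xy` is an assumed interpolation helper (not a library function), and the squared-difference cost is an illustrative stand-in for the patent's Jn:

    import numpy as np
    from scipy.optimize import minimize

    def register_to_reference(ref, img, sample_xy):
        """Find (angle, tx, ty) minimizing a squared-difference cost between
        reference image values and img sampled at transformed coordinates."""
        ys, xs = np.nonzero(ref > ref.mean())         # bright reference pixels
        xy = np.stack([xs, ys], axis=1).astype(float)
        v_ref = ref[ys, xs]

        def cost(p):
            th, tx, ty = p
            R = np.array([[np.cos(th), -np.sin(th)],
                          [np.sin(th),  np.cos(th)]])
            return np.sum((sample_xy(img, xy @ R.T + [tx, ty]) - v_ref) ** 2)

        return minimize(cost, x0=np.zeros(3)).x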