
1183 results about "Rotation matrix" patented technology

In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example, the 2×2 rotation matrix for an angle θ rotates points in the xy-plane counterclockwise through θ about the origin of a two-dimensional Cartesian coordinate system.
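The definition above can be sketched directly in NumPy (an illustrative snippet, not drawn from any patent on this page):

```python
import numpy as np

def rotation_2d(theta):
    """2x2 matrix rotating xy-plane points counterclockwise by
    theta radians about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Rotating (1, 0) by 90 degrees lands on (0, 1).
p = rotation_2d(np.pi / 2) @ np.array([1.0, 0.0])
```

A rotation matrix is orthogonal, so composing it with its transpose returns the identity.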

Guidance method based on 3D-2D pose estimation and 3D-CT registration with application to live bronchoscopy

A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedure. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points between the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. Continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop. Since depth-map information is available, tracking can be done by solving a 3D-2D pose estimation problem, which is more constrained than a 2D-2D pose estimation problem and does not suffer from the limitations associated with computing an essential matrix. The use of a correlation-based cost, instead of mutual information, as the registration cost makes it simpler to use gradient-based methods for registration.
Owner:PENN STATE RES FOUND
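The "linearization of the rotation matrix" mentioned in the abstract is, in its usual formulation, the small-angle approximation R ≈ I + [ω]×, which makes the pose update linear in the rotation vector ω. A generic NumPy sketch of that approximation (not the patent's actual estimator) compares it against the exact Rodrigues rotation:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def linearized_rotation(w):
    """First-order approximation R ~ I + [w]x, valid for small rotation
    vectors w; this is the kind of linearization that makes a pose
    estimation update step linear in w."""
    return np.eye(3) + skew(w)

def exact_rotation(w):
    """Rodrigues' formula, for comparison."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = skew(w / theta)
    return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

w = np.array([0.01, -0.02, 0.005])   # a small rotation vector
err = np.abs(linearized_rotation(w) - exact_rotation(w)).max()
```

For a rotation of about 0.02 rad, the approximation error is second order (roughly θ²/2), so the linearized matrix is accurate to a few parts in ten thousand.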

Fusion calibration method of three-dimensional laser radar and binocular visible light sensor

The invention discloses a fusion calibration method for a three-dimensional laser radar and a binocular visible light sensor. The laser radar and the binocular visible light sensor are used to obtain the three-dimensional coordinates of the plane vertices of a square calibration plate, and registration is then carried out to obtain the conversion relation between the two coordinate systems. In the calibration process, a RANSAC algorithm is used to fit a plane to the point cloud of the calibration plate, and the point cloud is projected onto the fitted plane, so that the influence of measurement errors on vertex coordinate calculation is reduced. For the binocular camera, the vertices of the calibration plate are obtained by a corner-point diagonal fitting method; for the laser radar, a distance-difference statistical method is used to identify the boundary points of the point cloud on the calibration plate. Using the obtained vertex coordinates of the calibration plate, the three-dimensional laser radar and the binocular visible light sensor can be accurately fusion-calibrated, the rotation matrix and translation vector between their coordinate systems are obtained, and a foundation is laid for data fusion of the three-dimensional point cloud and the two-dimensional visible light image.
Owner:BEIHANG UNIV
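Given corresponding calibration-plate vertices measured in the two coordinate systems, the rotation matrix and translation vector can be estimated with a standard SVD-based rigid registration. This is a generic Kabsch/Umeyama sketch, not the patent's exact procedure:

```python
import numpy as np

def rigid_registration(src, dst):
    """Estimate R, t with dst_i ~ R @ src_i + t from corresponding 3D
    points (e.g. calibration-plate vertices measured by the lidar and
    by the stereo camera) via the SVD-based Kabsch/Umeyama solution."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: recover a known transform from noiseless points.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 3))
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -1.2, 2.0])
R_est, t_est = rigid_registration(src, src @ R_true.T + t_true)
```

With noiseless correspondences the estimate reproduces the true transform to machine precision; the determinant guard keeps the solution a proper rotation rather than a reflection.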

Multi-camera system calibrating method based on optical imaging test head and visual graph structure

The invention provides a multi-camera system calibration method based on an optical imaging test head and a visual graph structure. The method comprises the following steps: independently calibrating each camera with the optical imaging test head to obtain initial values of its intrinsic and distortion parameters; calibrating the cameras pairwise, obtaining the fundamental matrix, epipolar constraint, rotation matrix, and translation vector between every two cameras with overlapping fields of view by linear estimation; building the connection relationship among the cameras from graph theory and the visual graph structure, and estimating the initial rotation vector and translation vector of each camera relative to the reference camera by a shortest-path method; and optimally estimating all intrinsic and extrinsic parameters of all cameras, together with the acquired three-dimensional marker point set of the optical imaging test head, by a sparse bundle adjustment algorithm to obtain a high-precision calibration result. The calibration process, which proceeds from local to global and from robust to precise, is simple, ensures high-precision and robust calibration, and is applicable to calibrating multi-camera systems with different measurement ranges and different distribution structures.
Owner:SUZHOU DEKA TESTING TECH CO LTD

Method and system for calibrating external parameters based on camera and three-dimensional laser radar

The invention discloses a method for calibrating the extrinsic parameters between a camera and a three-dimensional laser radar, comprising the following steps: from the covariance of measurement errors along the perpendicular from the origin of the three-dimensional laser radar coordinate system to target planes at different positions, and the covariance of measurement errors in the conversion from the laser radar coordinate system to the camera coordinate system along that perpendicular, derive an equation for the sum of squared variances, including the variance of the camera measurement noise and the variance of the laser radar measurement noise within the covariance of the conversion measurement errors; and calibrate the rotation matrix by maximum likelihood estimation, using the reciprocal of the sum of squared variances of all measurement noise as the weighting coefficient. The invention also discloses a corresponding system for calibrating the extrinsic parameters between the camera and the three-dimensional laser radar. Because the effect of measurement errors on the rotation matrix to be calibrated is taken into account during calibration, and maximum likelihood estimation is applied to the measurement errors in the rotation matrix result, the calibration result is more accurate.
Owner:BEIJING INSTITUTE OF TECHNOLOGY

Combined calibration method for multiple sensors of mobile robot

The application discloses a combined calibration method for multiple sensors of a mobile robot comprising a 2D laser radar and a camera. The method comprises the following steps: S1, calibrating the intrinsic parameters of the camera with a pinhole camera model to obtain the intrinsic parameter matrix of the camera (as shown in the description); S2, placing the camera and the 2D laser radar at fixed positions on the mobile robot and keeping them fixed relative to each other while the robot moves; S3, acquiring the position of the camera in a world coordinate system at a moment ti (as shown in the description), where i is a positive integer; S4, acquiring the position of the 2D laser radar in the world coordinate system at the moment ti (as shown in the description); S5, repeating S3 and S4 until i is not less than 4; and S6, obtaining the rotation matrix Rcl and translation matrix tcl between the camera and the 2D laser radar from an equation given in the description. The technical scheme completely breaks away from the restriction of a calibration target, so calibration can be performed in various environments and in real time during use. This solves the problem that positioning errors arise because the parameters of existing mobile robots are calibrated only before leaving the factory and drift afterwards, and it lets users perform calibration conveniently during use.
Owner:SHEN ZHEN 3IROBOTICS CO LTD

Surgical navigation method and system

A surgical navigation method comprises: collecting three-dimensional images of an organ by nuclear magnetic resonance or CT (computed tomography) before an operation; receiving a planned puncture trace input by a user and displaying it in the three-dimensional images; collecting three-dimensional images of the organ with an ultrasonic device during the operation and converting them into an operating-room coordinate system through a tracer; registering the preoperative three-dimensional images with the intraoperative three-dimensional images to obtain final displacement vectors and rotation matrices; converting, according to the final displacement vectors and rotation matrices, the three-dimensional images and the planned puncture trace into the operating-room coordinate system for display; and obtaining the motion trace of the puncture needle through the tracer during the operation and displaying it synchronously with the preoperative three-dimensional images and the planned puncture trace in the operating-room coordinate system. The invention also provides a corresponding surgical navigation system. Because blurred intraoperative images are replaced by clear preoperative images, the system provides great help in aspects such as operation, time, and image clarity.
Owner:ZHUHAI INSTITUTE OF ADVANCED TECHNOLOGY CO LTD
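Once registration has produced a rotation matrix and a displacement vector, converting preoperative points and the planned puncture trace into the operating-room coordinate system is a single affine map p' = R p + t. A minimal illustrative sketch (the helper name and the numbers are hypothetical, not from the patent):

```python
import numpy as np

def to_operating_room(points, R, t):
    """Apply the registration result p' = R @ p + t to an (N, 3) array
    of preoperative points (image landmarks or the planned puncture
    trace) to express them in the operating-room frame."""
    return points @ R.T + t

# Illustrative transform: 90-degree rotation about z plus a shift.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 5.0])
trace = np.array([[1.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0]])
mapped = to_operating_room(trace, R, t)
```

Applying one fixed (R, t) to every displayed object keeps the preoperative images, the planned trace, and the live needle trace consistent in the same frame.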

Indoor three-dimensional scene reconstruction method employing plane characteristics

The invention provides an indoor three-dimensional scene reconstruction method employing plane features. The method comprises the steps of: obtaining an RGB image and a depth image of an indoor scene in real time and reconstructing a single-frame three-dimensional point cloud; extracting features from two adjacent RGB images to obtain the initial rotation matrices between the two adjacent three-dimensional point clouds; downsampling each three-dimensional point cloud and extracting the plane features of the indoor scene from each one; determining the position of each plane; calculating an error rotation matrix; correcting the initial rotation matrices and carrying out pairwise stitching and registration of the three-dimensional point clouds; and finally reconstructing the indoor three-dimensional scene through the stitching and registration of all the point clouds. The method eliminates errors by exploiting the geometric features of the point clouds and extracts their plane features quickly and effectively, so the matching of plane features between the current and previous point clouds succeeds more often. According to the plane features, the method judges the plane type, calculates the error matrix, corrects the initial rotation matrices, and obtains a more accurate indoor three-dimensional point cloud map.
Owner:NORTHEASTERN UNIV
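Extracting a plane feature from a point cloud patch is commonly done with a least-squares fit via SVD, where the plane normal is the direction of least variance of the centered points. A generic sketch of that step (the patent's own extraction method may differ):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud patch.
    Returns a unit normal n and offset d with n . p + d ~ 0 for every
    point p; n is the right singular vector of the centered points
    with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]
    return n, -float(n @ centroid)

# Points scattered on the plane z = 2.
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(30, 2))
pts = np.column_stack([xy, np.full(30, 2.0)])
n, d = fit_plane(pts)
```

For points exactly on z = 2 the recovered normal is ±(0, 0, 1) and every residual n·p + d vanishes.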

Mechanical arm tail end camera hand-eye calibration method and system

CN109658460A (Inactive)
The embodiment of the invention provides a hand-eye calibration method and system for a camera at the end of a mechanical arm. The method comprises the steps of: obtaining calibration images of a calibration area collected while the mechanical arm simultaneously translates and rotates multiple times; obtaining a hand-eye calibration matrix based on the mechanical arm pose information and the extrinsic parameter information of the calibration images; and calibrating the hand-eye relationship at the end of the mechanical arm using the hand-eye calibration matrix. In this scheme, the calibration of the camera is completed using spatial motion and image acquisition at a plurality of different positions. Because the hand-eye relationship is constrained by multiple spatial relative positions, only the positions of at least three spatial poses relative to the base and the extrinsic parameters of the camera need to be obtained in the calibration process, and the procedure is universal across mechanical arm models, numbers of degrees of freedom, and camera models. Using translation and rotation matrix transformations, the end-of-arm coordinates are projected to pixel coordinates, and the hand-eye relationship between the camera and the end of the mechanical arm is calibrated while the end of the arm simultaneously rotates and translates.
Owner:BEIJING INST OF RADIO MEASUREMENT

Device and method of measuring surface topographies of mirror and mirror-like objects

The invention discloses a method and a device for measuring the surface topographies of mirror and mirror-like objects. Phase measurement deflectometry is adopted to measure mirror and mirror-like surface shapes. A combination of a liquid crystal display and a planar mirror serves as the calibration plate: the liquid crystal display is fixed and does not move, the planar mirror is moved freely four times, and the image reflected by the planar mirror is photographed by a CCD detector. Linear solution and bundle adjustment are then used to calibrate the intrinsic parameters of the camera, global pose estimation is used to calibrate the relative relation between the liquid crystal display and the camera, and finally the three-dimensional topography of the mirror surface under test is calculated through a gradient integral of the phase measurement deflectometry. The device and method overcome the defects of the traditional calibration process, which requires a calibration plate and precisely positioned control points attached to the planar mirror; the measurement cost is low and the measurement speed is high. Constraint conditions such as the orthogonality of the rotation matrix in the perspective imaging process, together with a Fourier-transform method, are introduced for corresponding-point matching, overcoming the influence of high noise and multi-frame processing on three-dimensional topography recovery.
Owner:TSINGHUA UNIV
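The "rotation matrix orthogonality" constraint mentioned above amounts to two conditions: R^T R = I and det(R) = +1. A small generic checker illustrating those conditions (not the patent's algorithm):

```python
import numpy as np

def is_rotation_matrix(R, tol=1e-8):
    """True when R satisfies the defining constraints of a proper
    rotation: orthogonality (R.T @ R == I) and det(R) == +1. These are
    exactly the constraints a calibration can impose on its estimate."""
    return bool(np.allclose(R.T @ R, np.eye(3), atol=tol)
                and np.isclose(np.linalg.det(R), 1.0, atol=tol))

c, s = np.cos(0.3), np.sin(0.3)
R_good = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
R_scaled = 2.0 * R_good                # violates orthogonality
R_reflect = np.diag([1.0, 1.0, -1.0])  # orthogonal, but det = -1
```

The determinant test matters: a reflection passes the orthogonality check alone but is not a physically realizable rotation.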

Calibration method of camera and inertial sensor integrated positioning and attitude determining system

CN102162738A (Inactive)
The invention provides a calibration method for an integrated camera and inertial sensor positioning and attitude determination system. The method comprises the following steps: calibrating the intrinsic matrix of the camera; shooting a plurality of images of a calibration object with known dimensions from different angles, and recording the roll angle and the pitch angle output by the inertial sensor when each image is shot; defining a world coordinate system, a camera coordinate system, an inertial sensor coordinate system, and a geomagnetic coordinate system; calculating the rotation matrix from the world coordinate system to the camera coordinate system at each shooting moment based on the image information and the spatial information of the calibration object in each image; grouping the shot images pairwise, establishing for each group an equation set in the rotation matrix from the inertial sensor coordinate system to the camera coordinate system, and solving it to obtain that rotation matrix; and establishing for each image an equation set in the rotation matrix from the geomagnetic coordinate system to the world coordinate system, and solving it to obtain that rotation matrix.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
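The abstract chains rotations among four coordinate systems. Such chains compose by matrix multiplication, and an unknown link can be isolated using the fact that a rotation's inverse is its transpose. A generic sketch (the frame names and angles are illustrative, not the patent's equations):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Frame-to-frame rotations compose by matrix product:
#   R_world->camera = R_imu->camera @ R_world->imu
R_world_to_imu = rot_z(0.4)
R_imu_to_camera = rot_x(-0.1)
R_world_to_camera = R_imu_to_camera @ R_world_to_imu

# Given the composite and one factor, the unknown extrinsic rotation
# follows from orthogonality (a rotation's inverse is its transpose):
R_recovered = R_world_to_camera @ R_world_to_imu.T
```

Stacking such relations over several image pairs is what turns the unknown sensor-to-camera rotation into an over-determined, solvable equation set.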

Method for establishing relation between scene stereoscopic depth and vision difference in binocular stereoscopic vision system

The invention discloses a method for establishing the relation between scene depth and disparity in a binocular stereoscopic vision system. The method comprises the steps of: first, solving the intrinsic parameters and the relative rotation matrices and translation vectors of the left and right cameras; then analyzing the main error sources and the error model of the binocular stereoscopic vision system; then analyzing the influence of the main errors on the baseline length and the disparity of the parallel binocular stereoscopic vision system; then establishing a general relation model between scene depth and disparity in the binocular stereoscopic vision system; obtaining depth information with a laser distance measuring instrument at a number of selected calibration points and carrying out a least-squares calibration to solve the relation model between scene depth and disparity; and finally solving the disparity between the left and right images through a corresponding stereo matching method, so as to realize accurate recovery of the scene depth and three-dimensional reconstruction. The method is direct, simple, and accurate, and greatly improves the accuracy of depth recovery and three-dimensional reconstruction.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
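For an ideal rectified binocular rig, the depth-disparity relation underlying such models is Z = f·B/d. A minimal sketch of that relation (the patent's full error model is more elaborate; the numbers below are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Ideal rectified stereo relation Z = f * B / d, where f is the
    focal length in pixels, B the baseline in meters, and d the
    disparity in pixels. Real systems must also model errors in the
    baseline and the disparity, as the patent above analyzes."""
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 12 cm baseline, 14 px disparity: about 6 m.
z = depth_from_disparity(700.0, 0.12, 14.0)
```

Because Z is inversely proportional to d, a fixed disparity error causes a depth error that grows quadratically with distance, which is why the error analysis above matters.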