135 results about "Essential matrix" patented technology

In computer vision, the essential matrix is a 3×3 matrix, 𝐄, with some additional properties described below, which relates corresponding points in stereo images assuming that the cameras satisfy the pinhole camera model.
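
The defining relation can be sketched numerically: for a pinhole stereo pair with relative motion (R, t), the essential matrix is E = [t]×R, and corresponding points in normalized image coordinates satisfy the epipolar constraint x₂ᵀE x₁ = 0. A minimal numpy sketch (the motion values are hypothetical, chosen only for illustration):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Relative motion of camera 2 with respect to camera 1 (hypothetical).
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0, 0, 1.0]])
t = np.array([1.0, 0.2, 0.0])

E = skew(t) @ R                      # essential matrix E = [t]_x R

# A 3D point seen by both pinhole cameras, in normalized image coordinates.
X = np.array([0.5, -0.3, 4.0])
x1 = X / X[2]                        # projection in camera 1
X2 = R @ X + t
x2 = X2 / X2[2]                      # projection in camera 2

residual = x2 @ E @ x1               # epipolar constraint: x2^T E x1 = 0
```

Note that E is rank 2 and determined only up to scale, so the magnitude of t is not recoverable from E alone.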

Guidance method based on 3D-2D pose estimation and 3D-CT registration with application to live bronchoscopy

A method provides guidance to the physician during live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames and then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points across the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. Continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop. Since depth-map information is available, tracking can be done by solving a 3D-2D pose estimation problem, which is more constrained than a 2D-2D pose estimation problem and does not suffer from the limitations associated with computing an essential matrix. The use of a correlation-based cost, instead of mutual information, as the registration cost makes it simpler to use gradient-based methods for registration.
Owner:PENN STATE RES FOUND
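
The claim that depth information makes tracking a better-constrained 3D-2D problem can be illustrated with a stripped-down sketch: if the rotation is known (say, from the coarse tracking step), the translation follows directly from a linear least-squares system over the 3D feature points and their 2D projections. This is a simplified numpy sketch with hypothetical values; the patent's actual linearized estimator also refines the rotation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known rotation (e.g. from the coarse tracking step) and the unknown
# translation to be recovered (hypothetical values).
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0, 0, 1.0]])
t_true = np.array([0.1, -0.05, 0.3])

# 3D feature points from the depth map, and their observed projections
# in normalized image coordinates.
X = rng.uniform([-1, -1, 3], [1, 1, 6], size=(20, 3))
Xc = X @ R.T + t_true
uv = Xc[:, :2] / Xc[:, 2:]

# Each observation (u, v) of a 3D point gives two equations linear in t:
#   t1 - u*t3 = u*(r3.X) - (r1.X),   t2 - v*t3 = v*(r3.X) - (r2.X)
A, b = [], []
for Xi, (u, v) in zip(X, uv):
    r = R @ Xi
    A.append([1.0, 0.0, -u]); b.append(u * r[2] - r[0])
    A.append([0.0, 1.0, -v]); b.append(v * r[2] - r[1])
t_est, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
```

With noiseless data the system is overdetermined but consistent, so t_est matches t_true; with real measurements the least-squares solution averages out noise.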

Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm

Active · CN103759716A · Simplified calculation process · Overcomes deficiencies · Picture interpretation · Essential matrix · Feature point matching
The invention relates to a dynamic target position and attitude measurement method based on monocular vision at the tail end of a mechanical arm, and belongs to the field of vision measurement. The method comprises the following steps: first performing camera calibration and hand-eye calibration; then shooting two pictures with the video camera, extracting spatial feature points in the target areas of the pictures with a scale-invariant feature extraction method and matching the feature points; solving the fundamental matrix between the two pictures with an epipolar geometry constraint method to obtain the essential matrix, and further solving the rotation and displacement transformation matrices of the video camera; then performing three-dimensional reconstruction and scale correction on the feature points; and finally constructing a target coordinate system from the reconstructed feature points to obtain the position and attitude of the target relative to the video camera. The method uses monocular vision, simplifies the calculation process, and, through hand-eye calibration, simplifies the elimination of erroneous solutions when measuring the position and attitude of the video camera. The method is suitable for measuring the relative positions and attitudes of stationary and low-dynamic targets.
Owner:TSINGHUA UNIV
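
The step of obtaining the essential matrix from the fundamental matrix can be sketched as E = KᵀF K, where K is the intrinsic matrix (the same camera takes both shots in this monocular method). A numpy sketch with hypothetical calibration and motion values:

```python
import numpy as np

K = np.array([[800.0, 0, 320],      # intrinsic matrix from calibration
              [0, 800.0, 240],      # (hypothetical values)
              [0, 0, 1.0]])

# Relative camera motion between the two shots (hypothetical).
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([0.5, 0.0, 0.1])
tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0.0]])
E = tx @ R                          # ground-truth essential matrix

# Fundamental matrix relating pixel coordinates: F = K^-T E K^-1.
Kinv = np.linalg.inv(K)
F = Kinv.T @ E @ Kinv

# Recover the essential matrix from F and the intrinsics: E = K^T F K.
E_rec = K.T @ F @ K

# Validity check: a true essential matrix has two equal singular values
# and one zero singular value.
s_vals = np.linalg.svd(E_rec, compute_uv=False)
```

The singular-value condition is what distinguishes an essential matrix from a general fundamental matrix and is what makes the subsequent R, t decomposition possible.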

Multi-camera system calibrating method based on optical imaging test head and visual graph structure

The invention provides a multi-camera system calibration method based on an optical imaging test head and a visual graph structure. The method comprises the following steps: independently calibrating each camera with the optical imaging test head to obtain initial values of each camera's intrinsic and aberration parameters; calibrating the cameras pairwise, obtaining the fundamental matrix, epipolar constraint, rotation matrix and translation vector between every two cameras whose fields of view overlap by means of linear estimation; building the connection relationships among the cameras as a visual graph according to graph theory, and estimating the initial rotation and translation vectors of each camera relative to the reference camera by a shortest-path method; and optimally estimating all intrinsic and extrinsic parameters of the cameras, together with the acquired three-dimensional marker point set of the optical imaging test head, by a sparse bundle adjustment algorithm to obtain a high-precision calibration result. The calibration process, proceeding from local to global and from robust to precise, is simple, ensures high-precision and robust calibration, and is applicable to multi-camera systems with different measurement ranges and distribution structures.
Owner:SUZHOU DEKA TESTING TECH CO LTD
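
The shortest-path seeding of per-camera poses can be sketched as a breadth-first traversal of the visibility graph that composes pairwise relative transforms along the way. The poses below are hypothetical stand-ins for the linearly estimated rotation/translation pairs; a real system would also weight paths by edge quality, not just hop count:

```python
import numpy as np
from collections import deque

def rigid(Rm, t):
    """Build a 4x4 rigid transform from rotation Rm and translation t."""
    T = np.eye(4)
    T[:3, :3] = Rm
    T[:3, 3] = t
    return T

Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0, 0, 1.0]])

# Pairwise relative poses T_ij (maps camera-j coordinates into camera i),
# one per overlapping field-of-view pair (hypothetical values).
rel = {
    (0, 1): rigid(Rz(0.10), [1.0, 0.0, 0.0]),
    (1, 2): rigid(Rz(0.20), [0.5, 0.1, 0.0]),
    (0, 3): rigid(Rz(-0.1), [0.0, 1.0, 0.0]),
}

# Undirected visibility graph: add the inverse transform for each edge.
edges = dict(rel)
for (i, j), T in rel.items():
    edges[(j, i)] = np.linalg.inv(T)
adj = {}
for (i, j) in edges:
    adj.setdefault(i, []).append(j)

def initial_poses(ref):
    """Breadth-first search (shortest path in edge count) from the
    reference camera, composing pairwise transforms along each path."""
    poses = {ref: np.eye(4)}
    q = deque([ref])
    while q:
        i = q.popleft()
        for j in adj[i]:
            if j not in poses:
                poses[j] = poses[i] @ edges[(i, j)]  # T_ref,j = T_ref,i T_i,j
                q.append(j)
    return poses

poses = initial_poses(0)
```

These initial values would then be handed to bundle adjustment for joint refinement.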

Reconstruction method and system for processing three-dimensional point cloud containing main plane scene

The invention proposes a reconstruction method and system for processing three-dimensional point clouds containing a main plane scene. The method comprises the following steps: obtaining multi-angle images of a static scene using a camera with known intrinsic parameters; detecting feature points in the images, matching the feature points of any two images to obtain matched point pairs, and obtaining the matched point sequences projected from the same scene points; for image pairs containing at least a preset number of matched point pairs, obtaining the fundamental matrix between the image pair from the matched points, and storing the corresponding spatial plane point sets; determining the relative position relationship between the image pairs from the fundamental matrix; realizing camera fusion and three-dimensional point reconstruction in a standard coordinate frame according to the relative position relationships; and optimizing the reconstructed three-dimensional point cloud. The method overcomes defects of existing three-dimensional point cloud reconstruction methods and realizes three-dimensional reconstruction without depending on the scene.
Owner:TSINGHUA UNIV
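
The three-dimensional point reconstruction step commonly uses linear (DLT) triangulation from two registered views. A minimal numpy sketch with hypothetical intrinsics and projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: homogeneous pixel coords."""
    # Each view contributes two rows: x*(P[2].X) - P[0].X = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A
    return X[:3] / X[3]              # dehomogenize

# Synthetic two-view setup (hypothetical intrinsics and baseline).
K = np.diag([700.0, 700.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])
x1 = P1 @ np.append(X_true, 1); x1 /= x1[2]
x2 = P2 @ np.append(X_true, 1); x2 /= x2[2]

X_hat = triangulate(P1, P2, x1, x2)
```

With noisy matches the same SVD solve gives the algebraic least-squares point, which the final optimization step then refines.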

Random trihedron-based radar-camera system external parameter calibration method

Inactive · CN103049912A · Low noise immunity · Low operational complexity requirements · Image analysis · Essential matrix · Radar systems
The invention discloses a random trihedron-based radar-camera system external parameter calibration method. With the method, the external parameters of the system can be solved with only two frames of data, using a trihedral scene found in a natural environment. The method comprises the following steps: stipulating a world coordinate system using the trihedron; performing planar fitting on the trihedron observed by the radar system to obtain the parameters of each plane, and solving the transformation between the world coordinate system and the radar coordinate system as well as the relative motion between the two frames of data in the radar coordinate system; and, in the camera system, solving the essential matrix from matched feature points extracted from the front and rear frames, then solving the relative motion in the camera coordinate system, solving the plane parameters in the camera coordinate system from the parameters in the radar coordinate system, finally solving the radar-camera external parameters, and performing a final optimization using the coplanarity of points on corresponding planes in the two coordinate systems. The scene required by the method is simple, and the method has the characteristics of high anti-interference performance, simple experimental equipment and high flexibility.
Owner:ZHEJIANG UNIV
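
The planar-fitting step on the radar-observed trihedron can be sketched as a total-least-squares plane fit via SVD; the direction of least variance of the centered points is the plane normal. Synthetic points stand in for radar returns here:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal n and offset d
    such that n . p + d ~ 0 for the input points."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                       # direction of least variance
    return n, -n @ centroid

# Synthetic radar returns on the plane z = 2 with mild noise.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] = 2.0 + 0.001 * rng.standard_normal(200)

n, d = fit_plane(pts)
```

Fitting each of the trihedron's three faces this way yields the per-plane parameters used to relate the radar and world coordinate systems.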

Large-scale part three-dimensional reconstruction method based on image sequence

Active · CN111815757A · Improves matching speed · Addresses error-prone conditions · Image enhancement · Image analysis · Pattern recognition · Essential matrix
The large-part three-dimensional reconstruction method based on an image sequence comprises the following steps: S1, an unmanned aerial vehicle carrying a camera flies around the target part, and a to-be-reconstructed image sequence is obtained; S2, an SIFT algorithm and a SURF algorithm are jointly adopted to extract image feature points; S3, camera motion is estimated by calculating the essential matrix and fundamental matrix from the sparse feature points obtained from the SIFT and SURF corner points, and three-dimensional space points are registered to obtain a sparse point cloud of the three-dimensional scene; S4, whether the optimized sparse point cloud has a symmetrical repeated structure is judged; and S5, the sparse point cloud is taken as seed-point and reference-image input, and dense reconstruction is performed with a multi-view dense three-dimensional point construction method to obtain a low-resolution depth map. The method provides three-dimensional point recovery and correction based on the image sequence, achieving construction from the image sequence to sparse three-dimensional points in space.
Owner:SHANDONG IND TECH RES INST OF ZHEJIANG UNIV
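
The joint feature matching of steps S2/S3 typically filters candidate matches with Lowe's ratio test: a match is kept only when its nearest neighbour is clearly closer than the second nearest. A minimal numpy sketch on toy descriptors (real SIFT and SURF descriptors are 128- and 64-dimensional):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.75):
    """Match two descriptor sets with Lowe's ratio test."""
    # Pairwise Euclidean distances (rows index desc1, columns desc2).
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]         # nearest and second nearest
        if row[j1] < ratio * row[j2]:
            matches.append((i, j1))
    return matches

# Toy descriptors: desc2 is a shuffled, slightly perturbed copy of desc1,
# so every query has exactly one good match.
rng = np.random.default_rng(2)
desc1 = rng.standard_normal((10, 8))
perm = rng.permutation(10)
desc2 = desc1[perm] + 0.01 * rng.standard_normal((10, 8))

matches = ratio_test_match(desc1, desc2)
```

Surviving matches would then feed the essential/fundamental matrix estimation (usually inside a RANSAC loop).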

Step-by-step calibration method for camera parameters of binocular stereoscopic vision system

Active · CN106981083A · Real-time calibration · Applicable calibration requirements · Image analysis · Dimension measurement · Electric control
The invention relates to a step-by-step calibration method for the camera parameters of a binocular stereoscopic vision system, belonging to the fields of image processing and computer vision detection, and concerns step-by-step calibration of the camera intrinsic and extrinsic parameters of a dimension measurement system for large-scale forgings. In the calibration method, the intrinsic parameter matrix of the camera is first calibrated off-line in a laboratory, with a high-precision electrically controlled platform driving the camera through two sets of mutually independent tri-orthogonal motions. Then, based on the properties of FOD points, the intrinsic parameters of the camera are obtained as the unique solution of a linear equation system. At the forging experiment site, the fundamental matrix between two images is computed with the 8-point method and the essential matrix is decomposed, realizing real-time on-line calibration of the camera's extrinsic parameters. Finally, based on the image information, the length of a high-precision three-dimensional scale is reconstructed to solve for the camera scale factor. The method has a simple and convenient calibration process, short calibration time and high precision. The cameras of the binocular vision measurement system can be precisely calibrated at the forging site from fewer images.
Owner:DALIAN UNIV OF TECH
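
The 8-point method used at the forging site can be sketched as the normalized (Hartley) 8-point algorithm: normalize the coordinates, solve the linear system one correspondence per row, enforce rank 2, and undo the normalization. A numpy sketch with synthetic correspondences and hypothetical intrinsics:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: centre the points and scale the mean
    distance from the origin to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row per correspondence: x2' F x1 = 0 is linear in the entries of F.
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt     # enforce rank 2
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic correspondences from two views (hypothetical setup).
rng = np.random.default_rng(3)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([1.0, 0.1, 0.2])

x1 = X @ K.T
x1 = x1[:, :2] / x1[:, 2:]
x2 = (X @ R.T + t) @ K.T
x2 = x2[:, :2] / x2[:, 2:]

F = eight_point(x1, x2)

# Epipolar residuals x2' F x1 should be near zero for all pairs.
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
residuals = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))
```

The normalization step is what keeps the linear system well conditioned with pixel-scale coordinates; without it the plain 8-point solve degrades badly.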

Three-dimensional reconstruction method based on scattered photo collections of the same scene

A 3D reconstruction method based on scattered photo sets of the same scene is divided into three stages. The first stage estimates pairwise image feature matching and relative camera motion, in four steps: (1) every two images are subjected to bidirectional nearest-neighbour search and feature-domain constraints to obtain candidate correspondences; (2) the candidate correspondences are subjected to a parallax-domain correspondence constraint to obtain hypothesis correspondences; (3) the image coordinates of the hypothesis correspondences are normalized to estimate an essential matrix satisfying them; (4) the essential matrix is decomposed to obtain four groups of possible solutions for the camera motion, and the final solution is determined by a fault-tolerant positive-depth constraint. The second stage selects an optimized initial reconstruction camera pair according to the results of the first stage and applies the standard sparse reconstruction method to recover the camera poses and the sparse geometric information of the scene. The third stage carries out selective accurate and dense matching based on the results of the second stage, and an accurate, dense 3D scene point cloud model is reconstructed by triangulation. The method obtains reliable camera poses and high-density scene geometric information, greatly shortens the reconstruction time, has relatively high reconstruction efficiency, and is applicable to processing scattered photo sets of large data size.
Owner:BEIHANG UNIV
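
Step (4) of the first stage, decomposing the essential matrix into four candidate motions and selecting by positive depth, can be sketched with the standard SVD decomposition. The motion values below are hypothetical; t is unit length because the essential matrix fixes translation only up to scale:

```python
import numpy as np

def decompose_essential(E):
    """Standard SVD decomposition of E into four candidate (R, t) pairs."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U                        # keep proper rotations
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def depths(R, t, x1, x2):
    """DLT-triangulate one normalized correspondence; return its depth
    in both cameras."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    X = X[:3] / X[3]
    return X[2], (R @ X + t)[2]

def pick_solution(E, x1, x2):
    """Positive-depth (cheirality) constraint: keep the candidate that
    places the point in front of both cameras."""
    for R, t in decompose_essential(E):
        z1, z2 = depths(R, t, x1, x2)
        if z1 > 0 and z2 > 0:
            return R, t

# Ground-truth motion (hypothetical values).
c, s = np.cos(0.15), np.sin(0.15)
R_true = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t_true = np.array([0.8, 0.6, 0.0])                 # unit length
tx = np.array([[0, 0, 0.6], [0, 0, -0.8], [-0.6, 0.8, 0.0]])
E = tx @ R_true

X = np.array([0.3, -0.2, 6.0])                      # point in front of both
x1 = X / X[2]
X2 = R_true @ X + t_true
x2 = X2 / X2[2]

R_est, t_est = pick_solution(E, x1, x2)
```

In practice the check is run over many correspondences and a vote is taken, which is what makes the constraint fault-tolerant against outlier matches.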

Monocular real-time three-dimensional reconstruction method based on loop closure detection

The invention relates to a monocular real-time three-dimensional reconstruction method based on loop closure detection, and belongs to the technical field of three-dimensional reconstruction. The method comprises: carrying out pairwise matching in an image sequence of a specified scene on the basis of image feature point matching theory to obtain image matching point pairs; solving the essential matrix and then using singular value decomposition to acquire an initial pose; using the initial pose or the previous-frame pose to obtain an estimated pose through a pose tracking model; judging whether the current frame is a key frame; then using a random fern algorithm to calculate the similarity between the current frame and the key frames, a loop being considered formed if the similarity reaches a threshold value; using the pose of the key frame to optimize the current pose if a loop is formed; using the obtained pose to compute a point cloud and fusing it into a TSDF global model; and adopting a ray casting algorithm to visualize the surface. The method yields highly accurate poses, eliminates the cumulative-error problem of three-dimensional reconstruction processes, and gives real-time reconstruction results of higher accuracy.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
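
The TSDF fusion step can be illustrated with a deliberately minimal one-dimensional sketch: each voxel stores a truncated signed distance and a weight, and new depth measurements are fused by a running weighted average. A real system fuses full depth maps into a 3D voxel volume; the grid and depths here are hypothetical:

```python
import numpy as np

voxel_z = np.linspace(0.0, 2.0, 201)   # voxel centres along one ray (m)
trunc = 0.1                            # truncation distance (m)

def integrate(tsdf, weight, depth):
    """Fuse one depth measurement into the TSDF by weighted averaging."""
    sdf = depth - voxel_z              # signed distance to the surface
    d = np.clip(sdf / trunc, -1.0, 1.0)
    mask = sdf > -trunc                # skip voxels far behind the surface
    w = mask.astype(float)
    new_w = weight + w
    tsdf = np.where(mask,
                    (tsdf * weight + d * w) / np.maximum(new_w, 1e-9),
                    tsdf)
    return tsdf, new_w

tsdf = np.zeros_like(voxel_z)
weight = np.zeros_like(voxel_z)
for z in [1.02, 0.98, 1.01, 0.99]:     # noisy depth estimates around 1 m
    tsdf, weight = integrate(tsdf, weight, z)

# The zero crossing of the fused TSDF is the extracted surface position.
seen = weight > 0
surface = voxel_z[seen][np.argmin(np.abs(tsdf[seen]))]
```

Averaging the truncated distances is what cancels per-frame depth noise, and ray casting then renders the surface at the zero crossing.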

Virtual view synthesis method based on homographic matrix partition

The invention discloses a virtual view synthesis method based on homographic matrix partition, comprising the following steps: 1) calibrating the left and right neighbouring view cameras to obtain their intrinsic parameter matrices and the fundamental matrix between them, deriving the essential matrix from the fundamental matrix, performing singular value decomposition on the essential matrix, and computing the motion parameters, namely the rotation matrix and translation vector, between the left and right neighbouring view cameras; 2) performing interpolation division on the rotation matrix and translation vector to obtain sub-homographic matrices from the left and right neighbouring views to a middle virtual view; 3) applying the forward mapping technique to map the two view images to the middle virtual view through the sub-homographic matrices, taking the mapping of one image as the reference coordinate system, and performing interpolation fusion on the two mapped images to synthesize the middle virtual view image. The method has the advantages of high synthesis speed, a simple and effective process, and high practical engineering value.
Owner:SOUTH CHINA UNIV OF TECH
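
The interpolation division of the rotation in step 2) can be sketched by taking a fractional rotation in axis-angle form (equivalent to spherical interpolation from the identity), with the translation scaled linearly. The motion values are hypothetical and the patent's exact interpolation scheme may differ:

```python
import numpy as np

def log_rotation(R):
    """Rotation matrix -> axis-angle vector (matrix logarithm)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if angle < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / (2 * np.sin(angle))

def exp_rotation(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    angle = np.linalg.norm(w)
    if angle < 1e-12:
        return np.eye(3)
    k = w / angle
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * Kx + (1 - np.cos(angle)) * Kx @ Kx

def interpolate_pose(R, t, alpha):
    """Fraction alpha of the relative motion (R, t); the middle virtual
    view corresponds to alpha = 0.5."""
    return exp_rotation(alpha * log_rotation(R)), alpha * np.asarray(t)

# Relative motion between the left and right views (hypothetical values).
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t = np.array([1.0, 0.0, 0.2])

R_half, t_half = interpolate_pose(R, t, 0.5)
```

Applying the half motion twice reproduces the full motion, which is the property the sub-homographies to the middle view rely on.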

Omnidirectional stereo vision three-dimensional reconstruction method based on Taylor series model

The invention discloses an omnidirectional stereo vision three-dimensional reconstruction method based on Taylor series models. The method comprises: a camera calibration step, which uses a Taylor series model to calibrate the omnidirectional vision sensor and obtain the intrinsic parameters of the camera; an epipolar geometry step, which calculates the essential matrix between the binocular omnidirectional cameras and extracts the rotation and translation components of the cameras; an epipolar rectification step, which rectifies the captured omnidirectional stereo image so that the rectified epipolar conics coincide with the image scan lines; and a three-dimensional reconstruction step, which performs feature point matching on the rectified stereo image and calculates the three-dimensional coordinates of points from the matching results. The method is applicable to various omnidirectional vision sensors, has a wide application range and high precision, and can carry out effective three-dimensional reconstruction even when the parameters of the omnidirectional vision sensor are initially unknown.
Owner:ZHEJIANG UNIV