
228 results about "Reprojection error" patented technology

The reprojection error is a geometric error corresponding to the image distance between a projected point and a measured one. It is used to quantify how closely an estimate X_hat of a 3D point recreates the point's true projection x. More precisely, let P be the projection matrix of a camera and x_hat be the image projection of X_hat, i.e. x_hat = P X_hat. The reprojection error of X_hat is given by d(x, x_hat), where d denotes the Euclidean distance between the image points represented by the vectors x and x_hat.
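The definition above translates directly into code. A minimal NumPy sketch (the projection matrix P, 3D point X and measured pixel are illustrative values, not from any of the patents below):

```python
import numpy as np

def reprojection_error(P, X, x_measured):
    """Euclidean pixel distance between a measured image point and the
    projection of the estimated 3D point X under camera matrix P (3x4)."""
    X_h = np.append(X, 1.0)            # homogeneous coordinates of X
    x_proj = P @ X_h                   # project into the image
    x_proj = x_proj[:2] / x_proj[2]    # dehomogenize to pixel coordinates
    return np.linalg.norm(x_proj - x_measured)

# Canonical camera P = [I | 0]; the point (1, 2, 4) projects to (0.25, 0.5)
P = np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([1.0, 2.0, 4.0])
err = reprojection_error(P, X, np.array([0.25, 0.5]))   # exact match: error 0
```

Summing this quantity over many points, and minimizing it over camera parameters or 3D points, is the common thread of the patents listed below.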

Calibration method of correlation between single line laser radar and CCD (Charge Coupled Device) camera

The invention discloses a calibration method of the correlation between a single-line laser radar and a CCD (Charge Coupled Device) camera, based on the fact that the CCD camera can weakly image the infrared light source used by the single-line laser radar. The calibration method comprises the steps of: firstly, extracting a virtual control point in the scanning plane with the assistance of a cubic calibration target; then filtering out visible light with an infrared filter so that only infrared light is imaged, carrying out enhancement, binarization and Hough transformation on the infrared image containing scanning-line information, and extracting two laser scanning lines, the intersection point of which is the image coordinate of the virtual control point. After acquiring multiple groups of corresponding points through these steps, the correlation parameters between the laser radar and the camera can be solved by an optimization method that minimizes the reprojection error. Because the invention acquires the corresponding-point information directly, the calibration process becomes simpler and the precision is greatly improved, with a calibrated angle error smaller than 0.3 degrees and a position error smaller than 0.5 cm.
Owner:NAT UNIV OF DEFENSE TECH
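Solving a rigid transform from corresponding point pairs, as in the final step above, has a closed-form least-squares solution in the noise-free case (the Kabsch/Procrustes alignment). A minimal 2D sketch with hypothetical data; the patent's actual method minimizes pixel reprojection error rather than point-to-point distance:

```python
import numpy as np

def align_rigid_2d(src, dst):
    """Closed-form least-squares rigid transform (R, t) minimizing
    sum ||R @ src_i + t - dst_i||^2 (Kabsch/Procrustes alignment)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical correspondences generated from a known transform
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.4, -0.2])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 2.]])
dst = src @ R_true.T + t_true
R, t = align_rigid_2d(src, dst)
```

With noisy correspondences the same closed form gives the least-squares optimum, which is why it is a common initializer for iterative reprojection-error minimization.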

Wide-baseline visible light camera pose estimation method

The invention relates to a wide-baseline visible light camera pose estimation method. The method includes the steps that firstly, Zhang's calibration method is used, and the internal parameters of the cameras are calibrated with a planar calibration board; eight datum points on a landing runway are selected in the common field of view of the cameras, and the world coordinates of the datum points are accurately measured off line with a total station; in the calibration process, cooperation identification lamps are placed at the datum points and the poses of the cameras are accurately calculated through detection of the cooperation identification lamps. The method takes into account the complex natural scene of an unmanned aerial vehicle landing site and the physical light-sensing characteristics of the cameras, and glare flashlights are designed and used as the cooperation identification lamps of the visible light cameras; the eight datum points are arranged on the landing runway and their space coordinates are measured with the total station to a spatial accuracy at the 10^-6 m level. The calibration result is accurate, and the reprojection error on the image is below 0.5 pixel.
Owner:NORTHWESTERN POLYTECHNICAL UNIV

Indoor autonomous drone navigation method based on three-dimensional visual SLAM

The invention provides an indoor autonomous drone navigation method based on three-dimensional visual SLAM. The method comprises the steps that an RGB-D camera obtains a color image and depth data of the drone's surrounding environment; the drone operating system extracts characteristic points; the drone operating system judges whether enough characteristic points exist: if the quantity of characteristic points is larger than 30, enough characteristic points exist and the drone attitude calculation process is conducted, otherwise relocation is conducted; a bundle adjustment method is used for global optimization; and an incremental map is built. Drone attitude information is obtained with only one RGB-D camera and the three-dimensional surrounding environment is rebuilt, so the complex process by which a monocular camera solves depth information is avoided, and the complexity and robustness problems of the matching algorithm in a binocular camera are solved; an iterative closest point method is combined with a reprojection error algorithm, so that drone attitude estimation is more accurate; the drone is located, navigated and flies autonomously indoors and in other unknown environments, and the problem that locating cannot be conducted when no GPS signal exists is solved.
Owner:Jiangsu Zhongke Intelligent Science and Technology Application Research Institute

Method and system for laser-IMU external parameter calibration

CN111207774A (Active) — Improves accuracy; avoids problems such as unsolvable equations caused by data ambiguity
The invention provides a method and system for laser-IMU external parameter calibration. The method comprises steps of: acquiring IMU measurement data and laser radar measurement data; carrying out IMU pre-integration of the obtained IMU measurement data, calculating an estimated value of the IMU pose transformation relative to the initial IMU pose at the next moment, and, from this estimate and the actual IMU measurement at the next moment, obtaining the associated residual error; processing the laser radar measurement data, utilizing IMU pre-integration to obtain the projection coordinates of the laser radar points reprojected into the world coordinate system, and calculating the reprojection error from each laser radar point to a calibration target map; and adopting a nonlinear least-squares method to iteratively optimize the laser radar-IMU external parameter calibration so that the external parameter calibration result is obtained. The method solves the problems that, in laser-IMU external parameter calibration, mechanical external parameters are not easy to obtain, manual measurement errors are large, and measurement is troublesome; the respective defects of the laser radar and the IMU compensate for each other to a certain extent, and the pose-solving precision and speed of the SLAM method are improved.
Owner:Jinan Shizhong Future Industry Development Co., Ltd.

Tightly coupled binocular vision-inertial SLAM method using combined point-line features

The invention relates to a tightly coupled binocular visual-inertial SLAM method using combined point-line features, comprising the following steps: determining the transformation relationship between the coordinate system of the camera and the coordinate system of the inertial measurement unit (IMU); establishing a point-line feature and IMU tracking thread to solve the initial three-dimensional point-line coordinates; using the IMU to predict the positions of the point-line features so as to correctly establish data associations for the features, and, after initializing the IMU, combining the IMU and point-line-feature reprojection errors to solve the pose transformation between consecutive frames; establishing a local bundle adjustment thread for the point-line features and the IMU, optimizing the three-dimensional point-line coordinates, the key-frame poses and the IMU state quantities in a local key-frame window; and establishing a loop closure detection thread for the point-line features, using the point-line features to compute a weighted bag-of-words score to detect loop closures, and optimizing the global state quantities. The tightly coupled binocular visual-inertial SLAM method using combined point-line features ensures stability and high precision when feature points are few and the camera moves quickly.
Owner:SHANGHAI INST OF MICROSYSTEM & INFORMATION TECH CHINESE ACAD OF SCI

Method for calibrating and optimizing camera parameters of vision measuring system

The invention provides a method for calibrating and optimizing the camera parameters of a vision measuring system. The method comprises the following steps: (1) extracting the circle center of the projection of a point on the calibration target onto the image plane as an image-plane coordinate; (2) calculating initial values of the internal parameters and external parameters of the camera of the vision measuring system according to the image-plane coordinates; (3) optimizing the camera distortion coefficients and the internal and external parameters of the camera while taking the object-plane coordinates of the calibration target as constants, and calculating the sum C1 of the reprojection errors on the image plane of all feature points in the different directions on the calibration target; (4) optimizing the object-plane coordinates of the calibration target while taking the optimized camera distortion coefficients and the internal and external parameters of the camera as constants and the object-plane coordinates of the calibration target as variables, and calculating the sum C2 of the reprojection errors; (5) checking the cycle conditions, and returning to step (3) if the cycle conditions are not satisfied; and (6) minimizing the sums C1 and C2 of the reprojection errors respectively, thereby acquiring the optimized internal and external parameters of the camera and the object-plane coordinates.
Owner:BEIJING INFORMATION SCI & TECH UNIV
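The scheme above alternates between two parameter blocks, each optimized while the other is held fixed — block coordinate descent. A minimal one-dimensional analogue, fitting a line by alternating closed-form updates of slope and intercept (toy data, not the patent's camera model):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                  # exact line: slope 2, intercept 1

u, v = 0.0, 0.0                    # initial guesses for slope / intercept
for _ in range(100):
    u = x @ (y - v) / (x @ x)      # best slope with intercept held fixed
    v = np.mean(y - u * x)         # best intercept with slope held fixed
```

Each sweep shrinks the remaining error by a constant factor, mirroring how the patent loops between its two optimization steps until the cycle conditions hold.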

Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof

CN102982548A (Active) — Improves collection efficiency; avoids complicated camera parameter calibration
The invention provides a multi-view stereoscopic video acquisition system and a camera parameter calibrating method thereof. The method comprises the following steps: obtaining the internal and external parameters of the cameras in the system; acquiring multi-view images of a common scene with all cameras at the same time; detecting and matching characteristic points of the multi-view images to obtain matching points among the view images; utilizing the camera parameters to reconstruct the three-dimensional space point cloud coordinates of the matching points among the view images; conducting sparse bundle adjustment to obtain a reprojection error from the three-dimensional space point cloud coordinates and the internal and external camera parameters; optimizing the reprojection error and the internal and external camera parameters; judging whether to conduct a secondary optimization according to the optimized reprojection error; and judging whether to recalibrate the parameters according to the secondary optimization result. Through characteristic-point detection and matching and sparse bundle adjustment, the complicated parameter calibration of the cameras is avoided, and the acquisition efficiency of stereoscopic video is improved.
Owner:TSINGHUA UNIV

Dynamic calibration system, and combined optimization method and combined optimization device in dynamic calibration system

The invention provides a dynamic calibration system, and a combined optimization method and device for the dynamic calibration system. The dynamic calibration system comprises a marker geometry feature extraction module, a camera parameter estimation module and a calibration result evaluation module. The marker geometry feature extraction module detects a marker in the image shot by a camera, extracts the geometric features of the marker, and tracks and matches the extracted geometric features on the images of the subsequent frames provided by the camera. The camera parameter estimation module obtains the reprojection error from the extracted geometric features of the marker and the current poses of the cameras, and obtains a calibration result when the reprojection error satisfies a set error threshold; all cameras are in their initial position states during the first run. The calibration result evaluation module evaluates the accuracy of the calibration result and determines whether to accept the parameters of the calibration result. The calibration parameters have relatively high accuracy, and the system and method are applicable to dynamic production-line calibration, high in execution efficiency, wide in application range and simple in system.
Owner:SHANGHAI OUFEI INTELLIGNET VEHICLE INTERNET TECH CO LTD

Method and device for acquiring external parameters of vehicle-mounted camera

The invention discloses a method and a device for acquiring the external parameters of a vehicle-mounted camera. In one particular embodiment, the method comprises steps of: acquiring an image set generated by photographing a preset marker set with a target vehicle-mounted camera and a reference camera on the vehicle while the vehicle is in different positions; recognizing a marker in each image and the angular points in the marker, and determining, from the coordinates of the angular points in the image, their coordinates in the world coordinate system and the internal parameters of the vehicle-mounted camera, the position and attitude information of the vehicle-mounted camera at the moment of image capture; determining, from the position and attitude information of the target vehicle-mounted camera and the reference camera at the different vehicle positions, the external parameters of the target vehicle-mounted camera relative to the reference camera at each position; and adjusting the external parameters according to the reprojection errors of each angular point to obtain the external parameters of the target vehicle-mounted camera. Thus, the external parameters of multiple vehicle-mounted cameras can be accurately acquired.
Owner:BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD
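Expressing the target camera's pose relative to the reference camera, as in the step above, is a composition of homogeneous transforms. A minimal sketch, assuming each pose is a 4x4 camera-to-world matrix (illustrative values):

```python
import numpy as np

def relative_extrinsics(T_ref, T_target):
    """Pose of the target camera expressed in the reference camera's frame,
    assuming both inputs are 4x4 camera-to-world homogeneous transforms."""
    return np.linalg.inv(T_ref) @ T_target

# Two cameras translated along x: reference at x=1, target at x=3
T_ref = np.eye(4);    T_ref[:3, 3] = [1.0, 0.0, 0.0]
T_target = np.eye(4); T_target[:3, 3] = [3.0, 0.0, 0.0]
T_rel = relative_extrinsics(T_ref, T_target)   # target sits 2 units ahead
```

The patent then refines such an initial relative pose by minimizing the corner-point reprojection errors.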

Vehicle mileage calculation method based on binocular vision

The invention belongs to the technical field of autonomous navigation of intelligent traffic vehicles, and particularly relates to a vehicle mileage calculation method based on binocular vision. The method comprises the following steps: obtaining the video stream of a binocular camera fixedly mounted at the top of a vehicle and transmitting it to an on-board processor; for each frame of the binocular video stream, extracting features from the left and right images, combining them with the features of the left and right images of the previous frame, and searching for a matched feature point set with a feature matching method; according to the matched feature points of the previous frame, using a stereo vision method to calculate the space coordinates of the corresponding three-dimensional points; re-projecting the space coordinates of those three-dimensional points onto the two-dimensional image coordinates of the current frame, and using a Gauss-Newton (GN) iteration algorithm to minimize the reprojection error, so as to solve the motion transformation of the vehicle between adjacent frames; and accumulating and updating the vehicle's mileage information according to the motion transformation. By matching and combining fast features with high-precision features, the precision is improved and the calculating speed is guaranteed.
Owner:TSINGHUA UNIV
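The GN (Gauss-Newton) iteration mentioned above linearizes the residuals at the current estimate and solves a normal equation per step. A minimal sketch for a single unknown — estimating a planar rotation angle from point correspondences (hypothetical data; the patent optimizes a full 6-DoF vehicle motion):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def gauss_newton_angle(p, q, theta=0.0, iters=20):
    """Minimize sum ||R(theta) p_i - q_i||^2 over the scalar angle theta
    by Gauss-Newton: stack the 2n residuals and their derivative."""
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])   # dR/dtheta
        r = (p @ R.T - q).ravel()            # residuals, shape (2n,)
        J = (p @ dR.T).ravel()               # Jacobian w.r.t. theta
        theta -= (J @ r) / (J @ J)           # Gauss-Newton update
    return theta

rng = np.random.default_rng(1)
p = rng.standard_normal((10, 2))
q = p @ rot(0.7).T                           # points rotated by 0.7 rad
theta = gauss_newton_angle(p, q)             # recovers ~0.7
```

For a full pose, the scalar division is replaced by solving the normal equations (J^T J) dx = -J^T r for the 6-vector update.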

GPS-fused robot vision inertial navigation integrated positioning method

The invention discloses a GPS-fused robot visual-inertial integrated positioning method. The method comprises the following steps: extracting and matching feature points between the left and right images and between consecutive frames of a binocular camera, and calculating the three-dimensional coordinates of the feature points and the relative poses of the image frames; selecting key frames in the image stream, creating a sliding window, and adding the key frames into the sliding window; calculating the visual reprojection error, the IMU pre-integration residual and the zero-bias residual and combining them into a joint pose estimation residual; carrying out nonlinear optimization of the joint pose estimation residual with the L-M (Levenberg-Marquardt) method to obtain an optimized visual-inertial odometry (VIO) robot pose; if GPS data exist at the current moment, performing adaptive robust Kalman filtering on the GPS position data and the VIO pose estimate to obtain the final robot pose; and if no GPS data exist, using the VIO pose as the final pose. The method improves the positioning precision of the robot, reduces computational cost, and satisfies the demands of large-range, long-duration inspection.
Owner:NANJING UNIV OF SCI & TECH
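The Kalman filtering fusion above can be illustrated by the scalar measurement update, which blends a prediction and a measurement in proportion to their variances (the patent's adaptive robust variant additionally reweights outlying GPS measurements):

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: correct the predicted state x
    (variance P) with measurement z (variance R) via the Kalman gain."""
    K = P / (P + R)             # gain: how much to trust the measurement
    x_new = x + K * (z - x)     # corrected state estimate
    P_new = (1.0 - K) * P       # uncertainty shrinks after the update
    return x_new, P_new

# Equal confidence in prediction (0.0) and measurement (2.0): meet halfway
x, P = kalman_update(0.0, 1.0, 2.0, 1.0)
```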

A method for solving 3D coordinates of spatial points based on mathematical model of stereo vision

CN109272570A (Active) — Realizes a high-precision solution
The invention provides a high-precision method for solving the three-dimensional coordinates of a space point in the world coordinate system, considering multiple factors in the internal parameters, based on a stereo vision mathematical model. The invention is based on a binocular stereo vision system: after calibrating the initial parameters of the left and right cameras, the camera parameters are optimized iteratively on the principle of image reprojection error minimization. The internal parameters of the left and right cameras allow unequal focal lengths, and the offsets of the camera optical axes in the left and right camera coordinate systems are obtained. By photographing the same target point, the precision of recovering its 3D information is improved through the optimized left and right camera parameters. The relationship between the left image coordinate system and the world coordinate system, the relationship between the right camera coordinate system and the world coordinate system, and the relationship between the right image coordinate system and the right camera coordinate system are established. Through the relationships among the left image coordinate system, the right image coordinate system, the world coordinate system and the right camera coordinate system, a high-precision solution of the three-dimensional coordinates of space points is realized.
Owner:HEFEI UNIV OF TECH
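Recovering the 3D coordinates of a space point from a calibrated stereo pair is commonly initialized with linear (DLT) triangulation; the patent then refines the camera parameters by reprojection-error minimization. A minimal sketch with hypothetical camera matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: stack the constraints x_i ~ P_i X and
    take the SVD null vector as the homogeneous 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# Left camera at the origin, right camera shifted 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))   # true point is (0, 0, 5)
```

With noisy observations the algebraic solution is no longer the reprojection-error optimum, which is why iterative refinement of the kind the patent describes follows.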

Binocular calibration method based on chaotic particle swarm optimization algorithm

CN105654476A (Active) — Avoids falling into local extrema; guarantees accuracy
The invention provides a binocular calibration method based on a chaotic particle swarm optimization algorithm. Several sets of dot-array planar calibration board image pairs with different poses are photographed simultaneously by two cameras. With distortion not considered, initial values of the internal and external parameters of the left and right cameras are obtained by Zhang's planar-template linear calibration method. Then, with second-order radial distortion and second-order tangential distortion considered, the three-dimensional reprojection error is minimized by the chaotic particle swarm optimization algorithm, thereby obtaining the final internal and external parameters of the two cameras. In the iterative optimization process, a global adaptive inertia weight (GAIW) is introduced; a particle's local neighborhood is constructed via a dynamic ring topology; speed and current position are updated according to the optimal fitness value in the particle's local neighborhood; and chaotic optimization is performed on the optimal position corresponding to the optimal fitness value in the particle's local neighborhood. The method effectively solves the problem of low calibration precision caused by local extrema in previous particle swarm optimization algorithms, thereby improving binocular calibration precision and ensuring high precision in subsequent binocular three-dimensional reconstruction.
Owner:Huzhou Lingchuang Technology Co., Ltd.
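The optimizer above is a variant of particle swarm optimization. A minimal global-best PSO on a toy quadratic cost stands in for the patent's chaotic version with adaptive inertia and ring topology (all parameter values here are illustrative):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia / cognitive / social weights
    for _ in range(iters):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy "reprojection cost": squared distance to the optimum (1, -2)
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, best_val = pso_minimize(cost, [(-5, 5), (-5, 5)])
```

On a multimodal calibration cost, this plain global-best scheme is exactly the variant that tends to stall in local extrema, which is what the patent's chaotic perturbations and ring-topology neighborhoods are meant to counteract.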

Light field camera calibration method based on multi-center projection model

CN110310338A (Active) — Overcomes reconstruction inaccuracies
The invention provides a light field camera calibration method based on a multi-center projection model. The method comprises the steps of: shooting a calibration plate under different poses by moving the calibration plate or the light field camera; obtaining the light field data, and determining the corner points on the calibration plate and the matched set of corner-point light rays; constructing a linear constraint between the light rays in the light field camera's coordinate system and the three-dimensional space points; calculating the internal parameters of the light field camera and the external parameters under the corresponding poses through linear initialization; and establishing a cost function based on the reprojection error and iterating to obtain the optimal solution of the internal parameters, external parameters and radial distortion parameters of the light field camera to be calibrated. The method essentially records the light rays in space with the light field camera; because the light rays are parameterized with two parallel planes and absolute coordinates, the problem of inaccurate three-dimensional point reconstruction is solved, and the internal parameters of the light field camera are calibrated accurately and robustly.
Owner:NORTHWESTERN POLYTECHNICAL UNIV

Pose estimation method for depth camera

CN110503688A (Active) — Improves convergence success rate and robustness
The invention relates to the technical field of fixed-point tracking, and discloses a pose estimation method for a depth camera. The method comprises the steps of: mounting the depth camera on a moving mechanism for photographing, obtaining a depth image and an RGB color image, and extracting ORB feature point pairs from every two adjacent photographed images; calculating an estimated value xiP of the camera pose change from the N ORB feature point pairs with missing depth information, and an estimated value xiQ from the M ORB feature point pairs with complete depth information, so as to obtain a total estimate xi0; constructing a minimum reprojection error model that fuses the error information corresponding to the ORB feature point pairs with missing depth information and those with complete depth information, and deriving the corresponding Jacobian matrix J; and, from the total estimate xi0, the minimum reprojection error function and the Jacobian matrix J, calculating the optimized total estimate xik with a nonlinear optimization method, thereby completing the estimation of the camera pose change between two adjacent frames.
Owner:SHANGHAI UNIV OF ENG SCI
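Merging the two pose-change estimates xiP and xiQ into a total estimate can be illustrated, in the simplest independent-estimate case, by inverse-variance weighting (the patent instead couples both through a joint reprojection-error model; the values below are illustrative):

```python
def fuse_estimates(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two independent scalar
    estimates: the minimum-variance unbiased linear combination."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)

# The more certain estimate (variance 0.5) pulls the fused value toward it
x, var = fuse_estimates(1.0, 0.5, 2.0, 1.0)
```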