
63 results about "Spatial calibration" patented technology

The process of spatial calibration involves calibrating a single image against known values, then applying that calibration to your uncalibrated image. This assumes, of course, that both images are at the same magnification.
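For illustration only (not part of any patent below), a minimal Python sketch of the calibration described above: a pixel-to-physical scale factor is derived from one image containing a reference of known length and then applied to measurements from an uncalibrated image taken at the same magnification. The 50 µm reference length and pixel counts are hypothetical values.

```python
def calibrate_scale(known_length_um: float, measured_pixels: float) -> float:
    """Return the spatial calibration factor in micrometres per pixel."""
    return known_length_um / measured_pixels

def apply_calibration(length_pixels: float, um_per_pixel: float) -> float:
    """Convert a pixel measurement from the uncalibrated image to micrometres."""
    return length_pixels * um_per_pixel

# Hypothetical example: a 50 µm reference spans 200 px in the calibrated image.
um_per_pixel = calibrate_scale(known_length_um=50.0, measured_pixels=200.0)   # 0.25 µm/px
print(apply_calibration(length_pixels=320.0, um_per_pixel=um_per_pixel))      # 80.0 µm
```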

Planar and spatial feature-based laser radar and camera automatic joint calibration method

Pending · CN109828262A · Calibration is fast and reliable · The operation process is fully automated · Image analysis · Electromagnetic wave reradiation · Radar · Devices fixation
The invention provides a planar and spatial feature-based laser radar and camera automatic joint calibration method. In the method, the laser radar and the camera separately locate planar and spatial features designed on a calibration object, so that the laser radar and the camera can be calibrated quickly and reliably. The laser radar and the camera are mounted on a device to be calibrated, and the device to be calibrated is then fixed at the output end of a six-axis mechanical arm. In the initial state, a calibration plate is arranged directly in front of the laser radar and the camera; two kinds of geometric features, namely planar calibration features and spatial calibration features, are designed on the calibration plate. The camera extracts calibration points from the planar calibration features, and the laser radar extracts calibration points from the spatial calibration features. The output end of the six-axis mechanical arm drives the device to be calibrated to move along a set trajectory, the camera obtains a corresponding calibration plate image, and the internal and external parameters of the camera are calibrated by the Zhang Zhengyou calibration method.
Owner:TZTEK TECH
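For illustration, a minimal sketch of the camera-intrinsics step the abstract mentions (Zhang Zhengyou's planar calibration) using OpenCV. The checkerboard geometry (9x6 inner corners, 25 mm squares) and the `calib_images/` folder are hypothetical, and the laser radar-to-camera extrinsic step of the patent is not shown.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)          # inner corners per row and column (assumed board geometry)
square_mm = 25.0          # physical size of one checkerboard square (assumed)

# Planar object points of the board, z = 0 in the board frame (Zhang's planar target).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):               # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]                      # (width, height)

# Intrinsic matrix K, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```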

Scene three-dimensional data registration method and navigation system error correction method

Inactive · CN106705965A · Complementary characteristics are good · Solve the problem of corruption · Navigation by speed/acceleration measurements · Navigation system · Vision sensor
The invention discloses a scene three-dimensional data registration method. The method comprises the following steps: the inertial sensor and the visual sensor undergo spatial calibration and time synchronization; the inertial sensor outputs pose information at two adjacent moments and transmits it to the visual sensor; the visual sensor registers two adjacent frames of data according to the pose information given by the inertial sensor; the pose change of the visual sensor is computed from this data registration; the inertial sensor is corrected in real time using the pose change of the visual sensor; and finally the registered data is merged into a three-dimensional map. The scene three-dimensional data registration method and the navigation system error correction method use the visual sensor to correct the inertial sensor, resolve the ambiguity that arises when motion is estimated by a single visual or inertial sensor, improve moving-object detection performance, mitigate the accumulation of inertial sensor error over time, and improve the precision of the inertial sensor.
Owner:SUZHOU ZHONGDE RUIBO INTELLIGENT TECH CO LTD
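For illustration, a minimal sketch of the correction loop the abstract describes: the inertial sensor predicts the relative pose between two adjacent frames, the visual sensor estimates the same relative pose by data registration, and the discrepancy is fed back to correct the inertial estimate. The transform values are hypothetical, and a real system would typically fuse the two estimates with a filter rather than this direct overwrite.

```python
import numpy as np

def se3(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical relative poses between two adjacent frames.
T_imu = se3(np.eye(3), np.array([1.02, 0.00, 0.01]))   # inertial prediction (drifted)
T_vis = se3(np.eye(3), np.array([1.00, 0.00, 0.00]))   # visual registration result

# Discrepancy between prediction and measurement, used to correct the inertial pose.
T_err = np.linalg.inv(T_imu) @ T_vis
T_imu_corrected = T_imu @ T_err                        # equals T_vis in this direct overwrite
print(T_err[:3, 3])                                    # translation error accumulated by the IMU
```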

Method for calibrating conic refraction and reflection camera of non-center axial symmetrical system

The invention relates to a method for calibrating a conic refraction and reflection camera of a non-central axially symmetric system. The method includes: a calibration block image is shot with the conic refraction and reflection camera, principal-point locus curves are constrained by image points of the refraction and reflection image, and the coordinate of the principal point is determined from multiple loci; the polar line of the principal point is solved using the contour circle formed by a given image point, where the intersection of the polar line with the tangent through that image point, and the intersection with the line connecting the principal point and the image points, form a group of vanishing points in orthogonal directions; and two groups of vanishing points in orthogonal directions are computed from two image points, from which the scale factor and the distortion factor of the conic mirror refraction and reflection camera are solved. With this method, the cross ratio is determined directly from the distances between points on lines of the spatial calibration block, and selecting points on those lines is simple, convenient and accurate, so the accuracy of the calibration results is improved.
Owner:YUNNAN UNIV
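For illustration, a minimal sketch of the cross-ratio computation the abstract relies on: for four collinear points A, B, C, D, the cross ratio (A,B;C,D) is invariant under projection, so known distances between points on a line of the spatial calibration block constrain the corresponding image points. The coordinates below are hypothetical.

```python
def cross_ratio(a: float, b: float, c: float, d: float) -> float:
    """Cross ratio (A,B;C,D) of four collinear points given by 1-D coordinates along the line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Equally spaced points on the calibration block, coordinates 0, 1, 2, 3: cross ratio 4/3.
print(cross_ratio(0.0, 1.0, 2.0, 3.0))
```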

Position information based ultrasonic wide view imaging method

The invention discloses a position information based ultrasonic wide view imaging method. The method includes the following steps: (1) during acquisition, an ultrasonic probe moves along a straight or approximately straight line, the system acquires an ultrasonic two-dimensional image sequence over the scan, and the position information of the images is obtained through a positioning device; (2) timing is set for the system so that each acquired image frame is matched with its corresponding position information; (3) spatial calibration is performed so that each image frame can be transformed into the coordinate system of the positioning device, the position and orientation information acquired by the positioning device while the ultrasonic probe moves is recorded in a world coordinate system, and the coordinates of each image frame in the world coordinate system are obtained by a final transformation for convenient display; (4) each image frame is mapped onto a preset two-dimensional plane according to its spatial position in the world coordinate system to form a wide view image. The method reduces the computational cost of wide view imaging and improves real-time performance.
Owner:SOUTH CHINA UNIV OF TECH
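For illustration, a minimal sketch of the coordinate chain in step (3): a pixel of an ultrasound frame is first mapped into the positioning device's frame by the spatial calibration transform, then into the world frame by the pose reported for that frame. Both 4x4 transforms, the 12.5 mm probe displacement, and the 0.1 mm/px image scale are hypothetical placeholders.

```python
import numpy as np

T_calib = np.eye(4)                    # probe image -> positioning device (from spatial calibration)
T_pose = np.eye(4)                     # positioning device -> world (per-frame tracked pose)
T_pose[:3, 3] = [0.0, 12.5, 0.0]       # e.g. the probe has advanced 12.5 mm along the sweep

mm_per_pixel = 0.1                     # image scale (assumed)

def pixel_to_world(u: float, v: float) -> np.ndarray:
    """Map pixel (u, v) of the current frame to world coordinates in millimetres."""
    p_image = np.array([u * mm_per_pixel, v * mm_per_pixel, 0.0, 1.0])
    return (T_pose @ T_calib @ p_image)[:3]

print(pixel_to_world(256, 128))
```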

Large-object three-dimensional measurement LED label calibration method based on tracker

Active · CN106643504A · Reduces the possibility of false matches · Improve stability · Using optical means · Three dimensional measurement · Matching methods
The invention discloses a large-object three-dimensional measurement LED label calibration method based on a tracker. The method includes the following steps: (1) matching the LED labels with a trajectory-based LED label matching method: a stereoscopic tracker observes the motion trajectories of the LED labels in its field of view; the trajectories observed by the two cameras of the stereoscopic tracker are matched; and the three-dimensional coordinates of the LED labels in the stereoscopic tracker coordinate system are measured according to the matching result; and (2) calibrating the LED labels with a checkerboard-based LED label calibration method: the three-dimensional coordinates of the LED labels in the stereoscopic tracker coordinate system are converted into the coordinate system of a three-dimensional scanner to realize the calibration of the LED labels. With this method, the LED label matching accuracy is high, the point features of the spatial calibration objects can be found accurately, the calibration precision and robustness are high, and the LED labels can be calibrated conveniently and accurately for three-dimensional measurement.
Owner:JIANGSU UNIV OF SCI & TECH
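For illustration, a minimal sketch of the coordinate conversion in step (2): given points measured in the stereoscopic tracker frame and the same points expressed in the three-dimensional scanner frame, a rigid transform between the two frames is estimated with an SVD-based (Kabsch) fit and then applied to further LED label coordinates. The point sets are hypothetical, and the patent's checkerboard-based procedure is not reproduced here.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t (points as rows)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

pts_tracker = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
pts_scanner = pts_tracker + np.array([0.5, -0.2, 1.0])      # pure-translation example
R, t = rigid_transform(pts_tracker, pts_scanner)
print(R @ pts_tracker[1] + t)          # an LED label's coordinates in the scanner frame
```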

City component positioning method and device and vehicle-mounted mobile measurement system

The invention provides a city component positioning method and device and a vehicle-mounted mobile measurement system. The method comprises the steps of: obtaining an image of the current street view, images of adjacent street views, and the corresponding GPS phase center coordinate data and spatial posture data; extracting at least one specified city component; obtaining the first pixel coordinate and the second pixel coordinate of the same-name points of the same city component; and calculating the spatial geographic coordinates of the city component from the first and second pixel coordinates by a forward intersection method, combining the spatial calibration parameters with the corresponding GPS phase center coordinate data and spatial posture data. The method can automatically recognise and rapidly position city components, so the specified city components are updated automatically and tedious manual recognition and editing work is avoided, which greatly improves the efficiency of updating city components in related thematic maps and provides a reliable guarantee for the thematic applications of departments such as traffic, firefighting and public security.
Owner:XIAN AEROSPACE TIANHUI DATA TECH CO LTD
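For illustration, a minimal sketch of forward intersection (two-view triangulation): once each street-view camera's projection matrix has been assembled from the spatial calibration parameters, the GPS phase center position and the spatial posture, the same-name pixel observed in two adjacent views is intersected into a three-dimensional point by linear (DLT) triangulation. The intrinsic matrix, camera placement and pixel coordinates below are hypothetical.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    """Linear triangulation of one point from two 3x4 projection matrices and pixel observations."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean coordinates

K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], float)   # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # first view at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])  # second view 2 m along x
print(triangulate(P1, P2, (700, 400), (580, 400)))                 # ~[1.0, 0.67, 16.67]
```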