99 results about "Collinearity equation" patented technology

The collinearity equations are a set of two equations, used in photogrammetry and remote sensing to relate coordinates in a sensor plane (in two dimensions) to object coordinates (in three dimensions). The equations originate from the central projection of a point of the object through the optical centre of the camera to the image on the sensor plane.
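In standard photogrammetric notation, each equation divides a rotated coordinate difference between the object point and the perspective centre by the corresponding depth component and scales by the principal distance. The following is a minimal numpy sketch of that forward projection; the function name, conventions (world-to-camera rotation, principal distance in metres) and the nadir-camera test scene are illustrative, not taken from any of the patents below:

```python
import numpy as np

def collinearity_project(P, S, R, f, x0=0.0, y0=0.0):
    """Project object point P (3,) to image coordinates via the collinearity equations.

    S : perspective centre (camera position), R : world-to-camera rotation matrix,
    f : principal distance, (x0, y0) : principal point offsets.
    """
    d = R @ (np.asarray(P, float) - np.asarray(S, float))  # point in the camera frame
    # Collinearity equations: x - x0 = -f * dX / dZ,  y - y0 = -f * dY / dZ
    x = x0 - f * d[0] / d[2]
    y = y0 - f * d[1] / d[2]
    return x, y

# Nadir-looking camera 100 m above the origin: the point directly below the
# perspective centre images exactly at the principal point.
x, y = collinearity_project(P=[0, 0, 0], S=[0, 0, 100], R=np.eye(3), f=0.05)
```

A point offset 10 m horizontally from the same camera projects 5 mm off the principal point, which matches the expected scale 1:2000 for a 50 mm lens at 100 m.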

Visual navigation/inertial navigation full combination method

The invention relates to a visual navigation / inertial navigation full combination method. The method comprises the following steps: first, visual navigation calculation: observation equations are listed based on the collinearity equations, the carrier position and attitude parameters are obtained through the least squares principle and adjustment, and the variance-covariance matrices of the parameters are calculated; second, inertial navigation calculation: navigation calculation is carried out in the local horizontal coordinate system, the carrier position, velocity and attitude parameters at each moment are obtained, and the variance-covariance matrices of the parameters are calculated; third, correction of the inertial navigation system by the visual system: by means of Kalman filtering, the navigation parameter errors and device errors of the inertial navigation system are estimated and subjected to compensation and feedback correction, so that the optimal estimates of all parameters of the inertial navigation system are obtained; fourth, correction of the visual system by the inertial navigation system: all parameters of the visual system are corrected through sequential adjustment. Compared with the prior art, the method has the advantages of rigorous theory, stable performance, high efficiency and the like.
Owner:TONGJI UNIV
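The third step above, where the visual solution corrects the inertial one, is the standard Kalman measurement update. A minimal one-dimensional sketch follows; the state layout, matrices and numbers are illustrative placeholders, not the patent's actual error-state model:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: the visual observation z corrects the
    inertial state estimate x with covariance P (all matrices illustrative)."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ (z - H @ x)       # compensated (corrected) state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 1-D position example: the inertial estimate has drifted to 10.0 (variance 4);
# a visual fix of 9.5 (variance 1) pulls the estimate most of the way back.
x, P = np.array([10.0]), np.array([[4.0]])
z, H, R = np.array([9.5]), np.array([[1.0]]), np.array([[1.0]])
x, P = kalman_update(x, P, z, H, R)
```

The corrected estimate lands at 9.6 with reduced variance 0.8, i.e. the more confident visual measurement dominates, which is the feedback-correction behaviour the abstract describes.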

Multi-line laser radar and multi-path camera mixed calibration method

The invention discloses a multi-line laser radar and multi-path camera mixed calibration method. The method comprises the following steps of S1, collecting the original image data of a multi-path camera, the point cloud data of a multi-line laser radar and the point cloud data of a static laser radar; S2, solving an internal reference model of each camera; S3, subjecting the images acquired by each camera to de-distortion treatment and obtaining corrected images; S4, registering the point cloud data of the static laser radar into the point cloud coordinate system of the multi-line laser radar; S5, acquiring the position (Xs,Ys,Zs) of each camera in the point cloud coordinate system of the multi-line laser radar from the point cloud data registered in step S4; S6, selecting the pixel coordinates (u,v) of at least four target objects in the corrected images of each camera and the corresponding three-dimensional coordinates (Xp,Yp,Zp) of the target objects in the point cloud with the multi-line laser radar as the coordinate origin; S7, according to the internal reference model of each camera, the position (Xs,Ys,Zs) of each camera, the pixel coordinates (u,v) of the target objects corresponding to the camera, and the corresponding three-dimensional coordinates (Xp,Yp,Zp), establishing a collinearity equation. In this way, the attitude angle elements of each camera and its nine direction cosines can be solved, and the calibration is completed.
Owner:XIAMEN UNIV
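The "nine direction cosines" solved in step S7 are simply the entries of the 3×3 rotation matrix built from the three attitude angles. A short sketch of that construction follows; the omega-phi-kappa rotation order is a common photogrammetric convention assumed here, not stated in the abstract:

```python
import numpy as np

def direction_cosine_matrix(omega, phi, kappa):
    """Nine direction cosines of a camera from its three attitude angles
    (omega-phi-kappa convention assumed; angles in radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz  # 3x3 matrix: the nine direction cosines

R = direction_cosine_matrix(0.1, -0.05, 0.3)
```

Whatever the angle values, the result is a proper rotation: its rows and columns are orthonormal and its determinant is 1, which is why three attitude angles suffice to determine all nine cosines.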

Optical remote sensing satellite rigorous imaging geometrical model building method

The invention provides an optical remote sensing satellite rigorous imaging geometrical model building method. The optical remote sensing satellite rigorous imaging geometrical model building method comprises the following steps that the geometrical relationship between the image point coordinates of an optical remote sensing satellite and the satellite is determined according to design parameters and on-orbit calibration parameters of an optical remote sensing satellite camera and the installation relation of the camera and the satellite; the shooting position of a satellite image is determined according to the GPS carried by the optical remote sensing satellite, the observation data of a laser corner reflector and the installation relation of the laser corner reflector and the satellite; the shooting angle of the satellite image is determined according to a star sensor carried by the optical remote sensing satellite and the observation data of a gyroscope and the installation relation of the gyroscope and the optical remote sensing satellite, a collinearity equation of all image points of the optical remote sensing satellite is built, and a rigorous imaging geometrical model of optical remote sensing satellite images is formed. The optical remote sensing satellite rigorous imaging geometrical model building method is the basis of optical remote sensing satellite follow-up geometrical imaging processing and application.
Owner:SATELLITE SURVEYING & MAPPING APPLICATION CENTER (SASMAC), NATIONAL ADMINISTRATION OF SURVEYING, MAPPING & GEOINFORMATION OF CHINA (NASG)

Geometric correction method of airborne imaging hyperspectrum of unmanned aerial vehicle

Active CN106127697A
The invention discloses a geometric correction method for airborne imaging hyperspectra of an unmanned aerial vehicle. At present, an image rectified only with POS (Positioning and Orientation System) data exhibits an excessive deviation from reality, and the point, line and surface distortions of the scenery are large and difficult to remove through polynomial geometric fine correction. The method comprises the following steps: collecting the position and attitude information of the unmanned aerial vehicle's current low-precision POS sensor to correct the collimation axis error; preprocessing the POS data according to the corner point and outline information of ground object features to obtain the corresponding exterior orientation elements; carrying out collinearity equation correction; and carrying out polynomial correction through ground correction points. The method takes into account the relationship between natural ground object features and the sensor's own error, improves correction accuracy and optimizes the low-precision POS data of unmanned aerial vehicle aerial photography; an aerial photograph can be accurately corrected while carrying only a low-precision POS sensor and a hyperspectral imager, providing technical support for the wide application of current low-cost imaging hyperspectra of unmanned aerial vehicles.
Owner:天岸马科技(黑龙江)有限公司

Lunar rover binocular vision navigation system calibration method

Active CN101876555A
The invention discloses a lunar rover binocular vision navigation system calibration method. A measurement system measures a calibration device comprising a light-reflecting measurement mark and a coded mark to obtain a control point coordinate, two cameras to be calibrated respectively shoot the calibration device, and internal parameters of the two cameras to be calibrated and external parameters of the two cameras to be calibrated at different positions relative to the calibration device can be calibrated by adopting a collinearity equation; and a theodolite measurement system is matched with the calibration device to calibrate respective prism square coordinate system of the two cameras to be calibrated, and respective external parameters of the two cameras to be calibrated are calibrated by utilizing the respective prism square coordinate system and the external parameters of the two cameras to be calibrated at different positions relative to the calibration device respectively. The lunar rover binocular vision navigation system calibration method radically solves the problem that a coordinate system of a camera cannot be obtained by a direct measurement method, and makes the coordinate system of the camera visible; meanwhile, the lunar rover binocular vision navigation system calibration method has the advantages of simple operation, high calibration precision and high work efficiency.
Owner:BEIJING INST OF CONTROL ENG

Calibration method for lunar rover binocular vision navigation system

Active CN101726318A
The invention relates to a calibration method for a lunar rover binocular vision navigation system, which is designed for the characteristics of the binocular vision navigation system and comprises the steps of utilizing a measurement system to measure a calibration device containing a back light reflection measurement mark and an encoding mark, obtaining control point coordinates, utilizing a camera to be calibrated to photograph the calibration device, and then adopting a collinearity equation for calibrating the internal parameters of the camera to be calibrated and its external parameters under the coordinate system of the calibration device; and utilizing a theodolite measurement system together with the calibration device to calibrate the coordinate system of a prism square, and utilizing the relationship between the coordinate system of the prism square and the external parameters of the camera to calibrate the external parameters of the camera to be calibrated under the coordinate system of the prism square. The calibration method makes full use of the known information of a large number of control points of the calibration device, matched with an industrial measurement system, thereby alleviating the low calibration precision of traditional camera external parameters, realizing simple operation and high calibration precision, completing the calibration of the internal parameters of the camera to be calibrated and the relative external parameters within an hour, and greatly improving the working efficiency.
Owner:BEIJING INST OF CONTROL ENG

Generation and auxiliary positioning method for live-action semantic map of smart scenic spot

The invention discloses a generation and auxiliary positioning method for a live-action semantic map of a smart scenic spot. The method comprises the following steps: acquiring scenic spot panoramic images and positioning information, extracting scenic spot image features for semantic description and scene classification identification marking, and establishing a scenic spot live-action semantic map database; collecting a scene stereo image pair at the user's current position; matching the photo image features and semantic information with the image features and semantics of the local semantic map of the scenic spot respectively; and acquiring the scenery image closest to the photograph, extracting the spatial position information of the scenery image marked in the semantic map, resolving the accurate position of the user's photography through a spatial forward intersection collinearity equation, and displaying the user's position coordinates on the scenic spot electronic map in real time. Through these main steps, the fine position of the user is calculated, and the defects of insufficient live-action semantic information and inaccurate positioning in an electronic map are overcome.
Owner:GUILIN UNIVERSITY OF TECHNOLOGY
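The spatial forward intersection mentioned above recovers a 3D position from two photographic rays. A minimal two-ray least-squares sketch follows; the function shape and the simple test geometry are illustrative, not the patent's actual formulation:

```python
import numpy as np

def forward_intersection(S1, d1, S2, d2):
    """Least-squares intersection of two photographic rays: each ray starts at a
    camera centre S and points along direction d. This is spatial forward
    intersection in its simplest two-ray form."""
    S1, d1 = np.asarray(S1, float), np.asarray(d1, float) / np.linalg.norm(d1)
    S2, d2 = np.asarray(S2, float), np.asarray(d2, float) / np.linalg.norm(d2)
    # Each ray contributes the condition (I - d d^T)(X - S) = 0, i.e. X lies
    # on the line; stack both and solve the overdetermined system for X.
    P1, P2 = np.eye(3) - np.outer(d1, d1), np.eye(3) - np.outer(d2, d2)
    A = np.vstack([P1, P2])
    b = np.concatenate([P1 @ S1, P2 @ S2])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X

# Two camera centres 10 m apart, both rays aimed at the same scene point:
X = forward_intersection([0, 0, 0], [5, 0, 20], [10, 0, 0], [-5, 0, 20])
```

With exact rays the residual is zero and X is the true intersection (5, 0, 20); with noisy image measurements the same least-squares system returns the closest point to both rays.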

Multi-area array aerial camera platform calibration method with constraint condition

The invention discloses a multi-area array aerial camera platform calibration method with a constraint condition. The method includes: employing a data acquisition strategy of multiple cross flying camera station exposure to acquire multiple groups of sub-images with an adjacent virtual image overlap degree of more than 80%; making use of the control point of a ground calibration field to calculate a photographing centre distance of sub-cameras and a sub-camera line element; during aerotriangulation, conducting bundle block adjustment, according to the control point coordinate, the connection points among matched sub-images, and the external orientation element initial value, establishing a model through a collinearity equation, adopting the photographing centre distance of the sub-cameras as a given value, i.e. taking the sub-camera line element constant as the constraint condition, and taking platform calibration parameters as a whole to perform calculation to solve the angle elements in the external orientation elements of the sub-images. According to the calibration method, a lot of uniformly distributed connection points are matched, precision of the platform calibration parameters is improved through the constraint condition, higher stitching precision of the generated virtual images can be guaranteed, and the mapping precision can be higher.
Owner:CHINA TOPRS TECH

Multi-lens sensor-based image fusion method

Active CN107492069A
The invention provides a multi-lens sensor-based image fusion method, and belongs to the field of image synthesis. The method includes: acquiring multiple images on the basis of a multi-lens sensor; constructing coordinate parameters according to the relative position relationship of the multi-lens sensor, and completing the coordinate transformation of the image points of a target object in the multiple images from plane space to three-dimensional space according to the coordinate parameters to obtain transformed images; extracting contour information of the target object according to the classical collinearity equations, the orientation elements of the multi-lens sensor and the attribute values of feature points; and splicing the multiple images according to the contour information to obtain a spliced image. Through the above processing, the multiple images (five, in the described embodiment) can be spliced into one image, and the relative spatial relationships between different regions of the target object can be obtained from the resulting image to establish a rigorous spatial mathematical model of the multi-lens camera. On the basis of this technology, a user can carry out fast unmanned aerial vehicle line inspection of transmission lines, fuse the images collected by multiple lenses, and establish a real geographical environment scene.
Owner:NINGBO POWER SUPPLY COMPANY STATE GRID ZHEJIANG ELECTRIC POWER +1

Thermal infrared image stitching method

The invention relates to a thermal infrared image stitching method, which comprises the following steps: calibrating an infrared thermal imager to obtain the inner orientation elements, and carrying out aerial survey of the ground with the aircraft-carried infrared thermal imager and a positioning and orientation system to obtain an infrared image sequence and position and attitude data; according to the inner orientation elements, the infrared image sequence and the position and attitude data at the corresponding photographing moment, adopting a collinearity equation method to carry out geometric coarse correction on each image in the infrared image sequence to obtain a value file and an absolute coordinate file; carrying out image preprocessing on the value file and image stretching to obtain an area whose grey level distribution is relatively concentrated and continuous; adopting an SIFT (Scale-Invariant Feature Transform) feature point matching method and a grey level distribution method to carry out image matching on the two images before and after stretching; utilizing the absolute coordinate difference between correct matching points in the two images to carry out integral error correction on the absolute coordinate file of the rear image; and, on GIS (Geographic Information System) software, carrying out the mosaicking of massive images on the basis of the absolute coordinate files.
Owner:INST OF DEFENSE ENG ACADEMY OF MILITARY SCI PLA CHINA

Geometric pretreatment method for vertical rail swing images of satellite-borne linear array sensor

Inactive CN103778610A
The invention discloses a geometric pretreatment method for vertical rail swing images of a satellite-borne linear array sensor. The geometric pretreatment method comprises steps of establishing a collinearity equation model of original single-frame images according to the imaging geometry of the original single-frame images; performing geometric correction treatment on all the original single-frame images, dividing all the original single-frame images into virtual three-dimensional grids, resolving rational polynomial model coefficients corresponding to the original single-frame images, establishing a positive and negative calculation relation of each original single-frame image coordinate and a tangent plane image coordinate, and performing geometric correction on all the original single-frame images on the basis of a rational polynomial model to obtain frame images under an object space tangent plane coordinate, wherein the geometric correction treatment comprises constructing a mutual conversion relation of an object space local coordinate and a geocentric rectangular coordinate and constructing a mutual conversion relation of the tangent plane image coordinate and the object space local coordinate; splicing all the frame images under the object space tangent plane coordinate on the basis of the coordinate to obtain the spliced images; resolving the rational polynomial model coefficients corresponding to the spliced images.
Owner:WUHAN UNIV

Measurement method for relative exterior orientation elements by using arbitrary photographic image pair of ground without any known photogrammetric control point

Active CN105241422A
The invention discloses a measurement method for relative exterior orientation elements using an arbitrary photographic image pair of the ground without any known photogrammetric control point. The measurement method comprises the following steps: for a mark-free condition, selecting two observation points near the to-be-measured area using arbitrary photogrammetric measurement technology, subjecting the to-be-measured object to nearly horizontal normal case photographing with a fixed-focus camera so as to obtain stereoscopic image pairs, and calculating the relative exterior orientation elements of the camera using a collinearity equation mathematical model and more than 6 corresponding image points; vertically placing two fixed-length sighting rods near the to-be-measured object, photographing from the two observation points respectively, then calculating the object space coordinates of the rod endpoints from the lengths of the fixed-length sighting rods, and calculating the relative exterior orientation elements of the camera using the mathematical model and the point coordinates; and vertically placing a fixed-length sighting rod near the to-be-measured object, then selecting more than 3 known points in a photographed image and calculating the relative exterior orientation elements of the camera during photographing using the length of the fixed-length sighting rod and the three known points.
Owner:BEIJING FORESTRY UNIVERSITY

Method for realizing geographic calibration of commercial camera photo based on positioning and orientation data

The invention relates to a method for realizing the geographic calibration of a commercial camera photo based on positioning and orientation data. The method comprises a step of acquiring a photo taken by a commercial camera in the air; a step of acquiring attribute information of the photo, wherein the attribute information includes the exposure time, focal length, CCD pixel size and image frame size; a step of obtaining the positioning and orientation data of the commercial camera at the exposure time; a step of solving the ground coordinate value of a characteristic point of the shooting area displayed in the photo according to a collinearity equation, based on the attribute information, the positioning and orientation data, and a digital elevation model (DEM) or average ground elevation value; and a step of carrying out geographic calibration on the photo according to the ground coordinate value of the characteristic point. Thus, the geographic calibration of the photo is carried out based on the photo attribute information, the positioning and orientation data obtained when the commercial camera took the photo, and the DEM or average ground elevation value, and the position information of the photo centre point and corner points, the ground range covered by the photo, and the actual location of any target in the photo can be determined.
Owner:中国海监南海航空支队
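Solving ground coordinates from an image point and an elevation value, as in the step above, means inverting the collinearity equations: cast the image ray into object space and intersect it with a horizontal plane at the known elevation. A minimal sketch under assumed conventions (world-to-camera rotation, principal distance in metres; the nadir test scene is illustrative):

```python
import numpy as np

def image_to_ground(x, y, f, S, R, Z_ground, x0=0.0, y0=0.0):
    """Invert the collinearity equations: intersect the ray of image point
    (x, y) with a horizontal plane at the DEM or average elevation Z_ground."""
    S = np.asarray(S, float)
    ray = R.T @ np.array([x - x0, y - y0, -f])  # ray direction in object space
    t = (Z_ground - S[2]) / ray[2]              # scale factor to reach the plane
    return S + t * ray                          # ground point (X, Y, Z_ground)

# Nadir camera at 100 m with f = 50 mm: the image point 5 mm from the
# principal point maps to a ground point 10 m from the ground nadir.
G = image_to_ground(0.005, 0.0, 0.05, [0, 0, 100], np.eye(3), 0.0)
```

With a DEM rather than a single elevation, the same ray would be intersected iteratively against the terrain surface instead of one fixed plane.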

Monocular camera-based planar moving target visual navigation method and system

The invention relates to a monocular camera-based planar moving target visual navigation method and system. The method comprises the following steps: establishing a world coordinate system, a camera coordinate system, a moving target coordinate system and a picture plane coordinate system; enabling the monocular camera to form 2D-3D point pairs by acquiring, in real time as the moving target advances, control points and the corresponding image points falling on the image plane physical coordinate system; conducting coordinate transformation on the 2D-3D point pairs according to the coordinate systems and solving the pose of the moving target; and assisting the moving target to move forward according to the moving target pose. The method adopts space geometry modeling and the RANSAC algorithm: the set of all possible positions of the camera optical centre is modeled as a space circle with a control point as the circle centre and the distance from the control point to the optical centre as the radius, the unknown position parameters and attitude parameters in the collinearity equation are decoupled and then solved respectively, and the RANSAC algorithm is used to eliminate outliers. The result is a high-speed, high-robustness and high-precision real-time visual navigation method.
Owner:NAT UNIV OF DEFENSE TECH

Monocular video high precision measuring method for wing wind tunnel test model elastic deformation

The invention discloses a monocular video high-precision measuring method for the elastic deformation of a wing wind tunnel test model. Based on the fact that the relative deformation of two adjacent cross sections of a wing wind tunnel test model is a small linear-elastic deformation, the rotation angles and the Y coordinate values of the deformation mark points of all cross sections are calculated in order from the wing root according to the superposition principle; a conventional monocular video measuring method is adopted, the Y coordinate values of the deformation mark points are brought into the collinearity equation, and the deformation data of the mark points on all cross sections are obtained in order. With this method, the monocular video measurement errors of wing wind tunnel test model elastic deformation can be greatly reduced; only one camera needs to be used while multi-view video measurement precision is obtained, the hardware cost of the measurement device is lowered, and the tedious homonymous point matching work of multi-view video measurement is avoided. The method is particularly suitable for environments where camera installation positions are limited and has great engineering application prospects.
Owner:INST OF HIGH SPEED AERODYNAMICS OF CHINA AERODYNAMICS RES & DEV CENT
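The superposition principle invoked above can be sketched as a running sum: each spanwise segment contributes a small additional rotation, and the deflection at each section accumulates the contributions of all inboard segments. The function and numbers below are an illustrative small-angle sketch, not the patent's actual formulation:

```python
import numpy as np

def accumulate_deflection(segment_lengths, segment_angles):
    """Superposition from the wing root outward: each segment adds a small
    rotation, and the bending deflection at each section is the running sum
    of all inboard contributions (small linear-elastic deformation assumed)."""
    deflection, total_angle, z = [], 0.0, 0.0
    for L, dtheta in zip(segment_lengths, segment_angles):
        total_angle += dtheta          # cumulative rotation of this section
        z += L * np.sin(total_angle)   # deflection gained across this segment
        deflection.append(z)
    return np.array(deflection)

# Four equal 0.25 m segments, each bending a further 0.01 rad:
w = accumulate_deflection([0.25] * 4, [0.01] * 4)
```

The deflection grows monotonically toward the tip, since every outboard section inherits all inboard rotations; these per-section values are what would then be fed into the collinearity equation for the video measurement.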

Defect detection system and method based on three-dimensional laser scanning technology

According to the defect detection system and method based on three-dimensional laser scanning technology, a two-dimensional image and the corresponding three-dimensional point cloud information are obtained, the three-dimensional point cloud is converted into an intensity image, and the intensity image and the two-dimensional image are registered; corner point extraction is performed on the two-dimensional image and the intensity image using the Harris detector according to the registration result, and the correspondence and extraction of homonymous points among the corner points are achieved; according to the correspondence of the homonymous points and the collinearity equation, the mapping relation between the three-dimensional point cloud coordinates and the two-dimensional image is established; and after the features of the two-dimensional image are extracted, the corresponding three-dimensional feature point cloud information is mapped according to the mapping relation between the three-dimensional point cloud and the two-dimensional image, and the obtained feature information is compared with the three-dimensional point cloud information of a standard part to determine whether defects exist. Extraction of hard-to-process three-dimensional point cloud feature points is thus converted into extraction of two-dimensional image feature points, the drawbacks of three-dimensional point cloud processing such as high difficulty and a large amount of calculation are overcome, and the detection efficiency is improved.
Owner:深圳了然视觉科技有限公司