160 results about "Central projection" patented technology

Image processing method for automatic pointer-type instrument reading recognition

The invention discloses an image processing method for automatic pointer-type instrument reading recognition. The method comprises the following steps: (1) Hough circle detection is carried out on the image, the circle center and radius of the dial are located by a weighted average method, and a square dial-region image is extracted; (2) the image is preprocessed and a binary thinned image of the instrument pointer is extracted; (3) a central projection method is used to determine the pointer angle; (4) position templates of the zero graduation line and the full graduation line are extracted, and the positions of the range starting point and ending point are calibrated; (5) the zero graduation line and full graduation line angles are obtained by template matching; and (6) the pointer reading is calculated from the pointer angle, the zero graduation line angle and the full graduation line angle. The method thus solves the problem that the dial position in the acquired image is not fixed because the relative position between the camera and the pointer-type instrument is not fixed, eliminates the subjective errors of manual meter reading, improves efficiency and precision, ensures personnel safety, has a wide application range, and offers strong robustness.
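
The central projection step (3) can be illustrated with a short sketch: for each candidate angle, the thinned pointer pixels lying on the ray cast from the dial center are accumulated, and the angle with the largest response is taken as the pointer direction. This is a minimal illustration assuming the dial center and radius from step (1) are already known; the function and parameter names are illustrative, not the patent's.

    import numpy as np

    def pointer_angle(thinned, center, radius, angle_step=0.5):
        """Estimate the pointer angle by central (radial) projection:
        accumulate the binary thinned-pointer pixels along the ray from the
        dial center for every candidate angle and keep the best one."""
        cy, cx = center
        h, w = thinned.shape
        radii = np.arange(1, int(radius))
        best_angle, best_score = 0.0, -1.0
        for deg in np.arange(0.0, 360.0, angle_step):
            theta = np.deg2rad(deg)
            xs = np.round(cx + radii * np.cos(theta)).astype(int)
            ys = np.round(cy - radii * np.sin(theta)).astype(int)
            ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            score = thinned[ys[ok], xs[ok]].sum()
            if score > best_score:
                best_angle, best_score = deg, score
        return best_angle

    # Reading from steps (4)-(6), with calibrated zero/full graduation angles:
    # reading = range_min + (range_max - range_min) * (pointer - zero) / (full - zero)
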
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

Ground laser radar reflection intensity image generation method based on central projection

The invention discloses a ground laser radar reflection intensity image generation method based on central projection. The method comprises the following steps: firstly, a ground laser radar is used to obtain the laser point cloud of the measured object; secondly, the laser source of the ground laser radar serves as the projection center, the central projection ray is determined by the projection center and the center of the laser point cloud, the plane that is perpendicular to this ray and passes through the central point serves as the projection plane, and the laser point cloud is projected onto the projection plane to obtain the corresponding projection points; thirdly, the minimum bounding rectangle of all the projection points and a corresponding rectangular image are built and divided by grids with equal spacing, so that every grid cell of the rectangular image has a corresponding grid cell in the minimum bounding rectangle. For each grid cell of the rectangular image, the reflection intensity value of the nearest projection point in the minimum bounding rectangle is converted into a gray value, and this gray value is assigned to the grid cell.
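
A minimal sketch of the projection and gridding steps is given below, assuming the laser source sits at the origin of the point-cloud coordinate system. The empty-cell handling is simplified relative to the patent (each projected point writes its own cell, rather than every cell querying its nearest projection point), and all names are illustrative.

    import numpy as np

    def intensity_image(points, intensities, grid_size=512):
        """Central-projection sketch: the projection plane passes through the
        point-cloud centroid and is perpendicular to the ray from the laser
        source (origin) to that centroid; projected points are binned into a
        regular grid and colored by reflection intensity."""
        centroid = points.mean(axis=0)
        n = centroid / np.linalg.norm(centroid)          # projection ray / plane normal
        # central projection: scale each point so it lands on the plane n.x = n.centroid
        proj = points * ((centroid @ n) / (points @ n))[:, None]
        # orthonormal basis (u, v) inside the projection plane
        u = np.cross(n, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-6:
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        uv = np.stack([(proj - centroid) @ u, (proj - centroid) @ v], axis=1)
        # minimum bounding rectangle of the projections, divided by an equal-spacing grid
        lo, hi = uv.min(axis=0), uv.max(axis=0)
        cell = np.maximum((hi - lo) / grid_size, 1e-9)
        gray = np.interp(intensities, (intensities.min(), intensities.max()), (0, 255))
        img = np.zeros((grid_size, grid_size), dtype=np.uint8)
        ix = np.clip(((uv - lo) / cell).astype(int), 0, grid_size - 1)
        # simplified: each point writes its gray value into its cell (last write wins);
        # the patent instead fills every cell from the nearest projected point
        img[ix[:, 1], ix[:, 0]] = gray.astype(np.uint8)
        return img
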
Owner:BEIJING UNIVERSITY OF CIVIL ENGINEERING AND ARCHITECTURE

Method for generating elemental images for integrated three-dimensional imaging on the basis of central projection

The invention discloses a method for generating elemental images for integrated three-dimensional imaging on the basis of central projection. Firstly, a three-dimensional scene is established in a computer, and a database of the coordinates and information of the three-dimensional scene is obtained; the optical parameters of the virtual micro lenses are set to match those of the real micro lenses used for display in the optical system of the integrated three-dimensional imaging system; all coordinates of the three-dimensional scene are converted from local coordinates into world coordinates; the viewpoint coordinates are determined from the position of the virtual micro lens camera and the position of the spatial scene; the virtual 3D target is converted into 2D images according to the different viewpoints, and the elemental images are generated from the viewpoint coordinates. By adopting geometric projection and mapping techniques and forming the virtual micro lens array elemental images in the computer, the obtained elemental images have a wider viewing-angle range, higher resolution and larger depth of field, the interference among elemental image arrays obtained with physical micro lens arrays is eliminated, and clear integrated three-dimensional images are reconstructed.
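
The per-viewpoint central projection can be sketched as a pinhole model: each virtual micro lens acts as a projection center, and every scene point is projected through it onto an image plane one gap behind the lens. The sketch below assumes a square lens grid in the z = 0 plane and a scene located at z > 0; the parameters (lens pitch, gap, resolution) are illustrative and not taken from the patent.

    import numpy as np

    def elemental_images(points3d, colors, lens_pitch, grid=(10, 10), gap=3.0, res=64):
        """Generate elemental images by central projection: each virtual micro
        lens is treated as a pinhole viewpoint, and every 3D scene point is
        projected through that lens center onto the plane z = -gap."""
        eis = np.zeros((grid[1], grid[0], res, res, 3), dtype=np.float32)
        for j in range(grid[1]):
            for i in range(grid[0]):
                # lens (viewpoint) center in world coordinates
                cx = (i - (grid[0] - 1) / 2.0) * lens_pitch
                cy = (j - (grid[1] - 1) / 2.0) * lens_pitch
                rel = points3d - np.array([cx, cy, 0.0])
                # central projection through the lens center onto z = -gap
                x = -gap * rel[:, 0] / rel[:, 2]
                y = -gap * rel[:, 1] / rel[:, 2]
                u = ((x / lens_pitch + 0.5) * res).astype(int)
                v = ((y / lens_pitch + 0.5) * res).astype(int)
                ok = (u >= 0) & (u < res) & (v >= 0) & (v < res)
                eis[j, i, v[ok], u[ok]] = colors[ok]
        return eis
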
Owner:CHANGCHUN UNIV OF SCI & TECH

Rapid seamless stitching method for unmanned aerial vehicle image sequences over large areas of complex terrain

Provided is a rapid seamless stitching method for unmanned aerial vehicle (UAV) image sequences over large areas of complex terrain, which comprises the following steps: first, using the flight-strip arrangement of the UAV image sequence as prior knowledge, multi-overlap SIFT feature points are extracted and matched between images; then, gross errors among the matching points are removed and the matches are refined with the random sample consensus (RANSAC) algorithm, and the transformation parameters of each image in the stitching region are solved by adjustment with the Levenberg-Marquardt algorithm; next, the images in the overlapping regions are optimally selected according to the displacement rules of central-projection image points and the relative positions of the images, and the stitching lines are determined; finally, color balancing and fusion are carried out at the seams, and the stitched image is output, thereby realizing seamless stitching of massive UAV images. The method improves the extraction efficiency of SIFT feature points, guarantees the geometric accuracy of the stitched images, and eliminates slight color differences on the two sides of the stitching line, so that stitched images with natural color transition and good continuity of ground objects and landforms are obtained.
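
The first two steps, multi-overlap SIFT extraction/matching and RANSAC-based gross-error removal, can be sketched with OpenCV as below. The patent's multi-image block adjustment with the Levenberg-Marquardt algorithm, seam-line selection and color blending are not reproduced; cv2.findHomography with the RANSAC flag stands in for the purification step.

    import cv2
    import numpy as np

    def match_and_estimate(img1, img2):
        """SIFT features, Lowe ratio-test matching, then RANSAC to remove
        gross errors and estimate the transformation between two
        overlapping UAV images."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        raw = matcher.knnMatch(d1, d2, k=2)
        good = [m for m, n in raw if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # gross-error removal
        return H, inliers

    # Usage: H, _ = match_and_estimate(cv2.imread("a.jpg", 0), cv2.imread("b.jpg", 0))
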
Owner:SHANDONG LINYI TOBACCO

Geometric approximate correction method for an aerial multispectral scanner without attitude information

CN101114022A (inactive)
The invention discloses a geometric rough correction method for an aerial multispectral scanner without attitude information, which comprises the following steps: 1) coordinate conversion is carried out on the multispectral scanner data, and measured values in the WGS-84 ground coordinate system are converted into values in Gaussian plane Cartesian coordinates; 2) according to an ideal flight model, a one-second model is used to simulate the roll angle; 3) a curve is fitted to the height data in the vertical plane, and the flight pitch angle at each updated data point is then calculated from the tangent direction; 4) the central projection constitutive equation is used to obtain the point coordinates; 5) according to the flight height, the scan angle and the instantaneous scan angle, the coordinates of the other points in the same scan row are obtained from the scanning model; and 6) a direct method is used to produce the rough correction image. The invention improves the geographic accuracy of aerial remote sensing under conditions without attitude information and improves timeliness, so that aerial remote sensing technology can be better applied to the production and livelihood of the national economy.
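
Step 5), recovering the coordinates of the other points in a scan row from the flight height and scan angles, follows standard whisk-broom central-projection geometry. The sketch below is a generic flat-terrain approximation, not the patent's constitutive equation, and the angle conventions are assumptions.

    import numpy as np

    def ground_offset(flight_height, scan_angle_deg, roll_deg=0.0, pitch_deg=0.0):
        """Illustrative whisk-broom geometry under a central-projection model:
        the across-track ground offset of a scan sample is flight_height times
        the tangent of the (scan + roll) angle, and the along-track offset
        follows the pitch angle.  Flat terrain is assumed."""
        across = flight_height * np.tan(np.deg2rad(scan_angle_deg + roll_deg))
        along = flight_height * np.tan(np.deg2rad(pitch_deg))
        return across, along
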
Owner:SECOND INST OF OCEANOGRAPHY MNR

Monocular vision plane distance measuring method free of photo-control

The invention discloses a monocular vision plane distance measuring method free of photo-control points. The method comprises the following stages: a measurement system integration stage, in which a camera and a laser range finder are fixedly connected to form a directly oriented vision measurement system; a measurement system calibration stage, in which integrated calibration of the measurement system determines the camera internal parameters, the distortion aberrations and the eccentric angle between the camera and the laser range finder, and the nonlinear distortion and the measured elevation and attitude of the camera are corrected; and a plane distance measurement stage, in which an "image-distance" transformation between the image plane and the object plane is established, based on the central projection imaging principle that the image point, the optical center and the object point of the camera are collinear in an oblique view, finally realizing monocular vision plane distance measurement free of photo-control points. The method has low hardware cost and operational complexity, greatly reduces field workload by completing the measuring-point layout in a few minutes without arranging photo-control points on the plane to be measured, and is universal, without depending on prior knowledge of the scene.
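
The "image-distance" transformation rests on intersecting the central-projection ray through an image point with the horizontal object plane. A minimal sketch under assumed conventions (pinhole camera, distortion already corrected, only the pitch angle modelled) is given below; the parameter names are illustrative rather than the patent's.

    import numpy as np

    def image_to_ground(u, v, f, cx, cy, cam_height, pitch_deg):
        """Intersect the ray through pixel (u, v) and the optical center with
        the ground plane, given the camera height and pitch angle; returns the
        lateral and forward ground distances from the camera foot point."""
        pitch = np.deg2rad(pitch_deg)
        # ray direction in the camera frame (x right, y down, z forward)
        d_cam = np.array([(u - cx) / f, (v - cy) / f, 1.0])
        # rotate into a level frame; a positive pitch tilts the optical axis downward
        R = np.array([[1, 0, 0],
                      [0, np.cos(pitch), np.sin(pitch)],
                      [0, -np.sin(pitch), np.cos(pitch)]])
        d = R @ d_cam
        if d[1] <= 1e-9:
            raise ValueError("ray does not intersect the ground plane")
        t = cam_height / d[1]            # scale at which the ray meets the ground
        return d[0] * t, d[2] * t
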
Owner:HOHAI UNIV

Calibration method of coupling position relationship between micro lens array and detector

The invention discloses a calibration method for the coupling position relationship between a micro lens array and a detector. The method includes: combining the device formed by coupling the micro lens array with the detector and a front optical system into a light field imaging system; building a mapping equation between the micro lens centers and points pi,j on the detector plane based on the central projection principle; using a parallel light source to determine a rough distance between the main lens and the micro lens array; using a uniform area light source to calibrate the light field imaging system and determine the actual coordinates of the points pi,j on the detector plane; determining an accurate value of L1; and using an optimization algorithm to estimate the coupling error parameters phi, omega, k and d between the micro lens array and the detector. The method only requires a rough position relationship between the micro lens array and the front optical system of the light field imaging system, can achieve calibration of parameters such as the distance and rotation angles between the micro lens array and the detector, is simple to use, and facilitates practical operation.
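
The mapping equation of the patent relates each micro lens center to its image-center point pi,j on the detector by central projection through the main lens. A hedged sketch of one possible parameterisation and its least-squares estimation is shown below; the error model (a scaling by (L1 + d) / L1, an in-plane rotation and a first-order tip/tilt term) is an assumption for illustration, not the patent's exact equation.

    import numpy as np
    from scipy.optimize import least_squares

    def predicted_spots(lens_centers, params, L1):
        """Predict the detector-plane spot positions pi,j: central projection of
        each micro-lens center through the main lens center (scaling by
        (L1 + d) / L1), followed by a small rotation and translation that model
        the coupling error.  The parameterisation is illustrative."""
        phi, omega, kappa, d, tx, ty = params
        scale = (L1 + d) / L1
        c, s = np.cos(kappa), np.sin(kappa)
        Rz = np.array([[c, -s], [s, c]])
        # first-order tip/tilt: a slight anisotropic foreshortening of the two axes
        tilt = np.diag([np.cos(phi), np.cos(omega)])
        return (scale * lens_centers) @ (Rz @ tilt).T + np.array([tx, ty])

    def calibrate(lens_centers, measured_spots, L1, x0=(0, 0, 0, 0.5, 0, 0)):
        """Estimate the coupling parameters by least squares from spot centroids
        measured under uniform area illumination."""
        res = least_squares(
            lambda p: (predicted_spots(lens_centers, p, L1) - measured_spots).ravel(),
            x0)
        return res.x
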
Owner:BEIHANG UNIV