589 results about "Homography" patented technology

In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that this is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation.
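In coordinates, a planar homography is a non-singular 3×3 matrix defined only up to scale, acting on points written homogeneously. The worked form below is standard textbook notation added for illustration; it is not taken from any of the listed patents.

```latex
% A homography H maps x = (u, v, 1)^T to x' ~ H x (equality up to a scale factor):
H = \begin{pmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33} \end{pmatrix},
\qquad
u' = \frac{h_{11}u + h_{12}v + h_{13}}{h_{31}u + h_{32}v + h_{33}},
\qquad
v' = \frac{h_{21}u + h_{22}v + h_{23}}{h_{31}u + h_{32}v + h_{33}}.
```

Because H has eight effective degrees of freedom, four point correspondences in general position determine it up to scale; this is the minimal sample size used by standard RANSAC homography estimators.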

Stereoscopic image aligning apparatus, stereoscopic image aligning method, and program of the same

A stereoscopic image aligning apparatus (200) automatically aligns image pairs for stereoscopic viewing in a shorter amount of time than conventional apparatuses, and is applicable to image pairs captured by a single-sensor camera or a variable-baseline camera without relying on camera parameters. The stereoscopic image aligning apparatus (200) includes: an image pair obtaining unit (205) obtaining an image pair including a left-eye image and a right-eye image corresponding to the left-eye image; a corresponding point detecting unit (252) detecting a corresponding point representing a set of a first point included in a first image that is one of the images of the image pair and a second point included in a second image that is the other image of the pair and corresponds to the first point; a first matrix computing unit (254) computing a homography transformation matrix for transforming the first point such that the vertical parallax between the first and second points is smallest and an epipolar constraint is satisfied; a transforming unit (260) transforming the first image using the homography transformation matrix; and an output unit (210) outputting a third image that is the transformed first image, and the second image.
Owner:PANASONIC CORP
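As a rough illustration of the alignment idea in this abstract (reduce vertical parallax while keeping horizontal disparity), the sketch below matches features between the two views and fits a homography that sends each left-image point to a target with its own column but the matched right-image row. This is my own simplification in Python/OpenCV, not the patented unit-by-unit procedure, and the constants are arbitrary.

```python
import cv2
import numpy as np

def align_stereo_pair(left, right):
    """Warp the left image so matched points land on the right image's row
    (vertical parallax ~ 0) while keeping their own column (horizontal
    disparity preserved). Simplified sketch, not the patented method."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(left, None)
    k2, d2 = orb.detectAndCompute(right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    src = np.float32([k1[m.queryIdx].pt for m in matches])
    rgt = np.float32([k2[m.trainIdx].pt for m in matches])
    # target points: column from the left view, row from the right view
    dst = np.column_stack([src[:, 0], rgt[:, 1]]).astype(np.float32)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    aligned = cv2.warpPerspective(left, H, (left.shape[1], left.shape[0]))
    return aligned, H
```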

Improved method of RGB-D-based SLAM algorithm

Inactive | CN104851094A | Tags: Matching result optimization, High speed, Image enhancement, Image analysis, Point cloud, Estimation methods
Disclosed in the invention is an improved RGB-D-based simultaneous localization and mapping (SLAM) method. The method comprises two parts: a front end and a back end. The front end performs feature detection and descriptor extraction, feature matching, motion transformation estimation, and motion transformation optimization. The back end uses the 6-D motion transformation relations obtained by the front end to initialize a pose graph, carries out closed-loop detection to add closed-loop constraints, performs pose graph optimization with a non-linear error function optimization method to obtain a globally optimal camera pose and camera motion track, and carries out three-dimensional environment reconstruction. According to the invention, feature detection and descriptor extraction are carried out with the ORB method and feature points with invalid depth information are filtered out; bidirectional feature matching is carried out with a FLANN-based KNN method and the matching result is optimized using a homography matrix transformation; a precise set of inlier matching point pairs is obtained with an improved RANSAC motion transformation estimation method; and the speed and precision of point cloud registration are improved with a GICP-based motion transformation optimization method.
Owner:XIDIAN UNIV
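The front-end matching chain named in the abstract (ORB features, FLANN-based KNN matching in both directions, homography-based outlier rejection with RANSAC) can be sketched as follows with OpenCV. The ratio-test value 0.7 and the LSH index parameters are my assumptions, not values from the patent, and the improved RANSAC and GICP stages are not reproduced.

```python
import cv2
import numpy as np

def front_end_match(img1, img2):
    """ORB + bidirectional FLANN KNN matching, filtered by a RANSAC homography."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # FLANN with an LSH index, suitable for binary ORB descriptors
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1), {})

    def one_way(da, db):
        good = []
        for pair in flann.knnMatch(da, db, k=2):
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                good.append(pair[0])
        return good

    fwd = one_way(d1, d2)
    bwd = {(m.trainIdx, m.queryIdx) for m in one_way(d2, d1)}
    mutual = [m for m in fwd if (m.queryIdx, m.trainIdx) in bwd]

    src = np.float32([k1[m.queryIdx].pt for m in mutual])
    dst = np.float32([k2[m.trainIdx].pt for m in mutual])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return [], None
    inliers = [m for m, keep in zip(mutual, mask.ravel()) if keep]
    return inliers, H
```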

High-precision projector and camera calibration system and method

The invention discloses a high-precision projector and camera calibration system and method. The method comprises the steps that: a camera is calibrated with a camera calibration method to obtain the camera intrinsic parameters; a pure white pattern is projected onto a calibration board so that it overlays the calibration board pattern, and a calibration area image is captured; image distortion is corrected with the camera intrinsic parameters, and corner point coordinates in the calibration area image are extracted; a homography matrix between the camera image plane and the calibration board plane is estimated from the corner correspondences; different specific chessboard patterns are projected onto the calibration board in sequence, each overlaying the calibration board pattern, and calibration area images are captured; differential processing and filtering are applied to the calibration area images, and the corner point coordinates on the calibration board plane are extracted; the corner point coordinates are averaged and the corner points are mapped to the calibration board plane through the homography matrix; and the above steps are repeated depending on how the corner points are obtained. The projector is then calibrated with the camera calibration method. The system and method improve the precision of an optical three-dimensional measuring system.
Owner:TSINGHUA UNIV
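One step of the pipeline, estimating the homography between the undistorted camera image and the calibration board plane from detected corners, might look like the sketch below. The chessboard size, square size and RANSAC threshold are illustrative assumptions; the patent's own corner extraction (differential and filtering processing of the projected patterns) is not reproduced.

```python
import cv2
import numpy as np

def board_plane_homography(img, K, dist, board_size=(9, 6), square_mm=25.0):
    """Undistort the captured image, detect the printed chessboard corners,
    and fit the image-plane-to-board-plane homography."""
    und = cv2.undistort(img, K, dist)
    found, corners = cv2.findChessboardCorners(und, board_size)
    if not found:
        return None
    cols, rows = board_size
    # planar board coordinates (Z = 0), one point per corner, in millimetres
    board_pts = np.array([[(i % cols) * square_mm, (i // cols) * square_mm]
                          for i in range(cols * rows)], np.float32)
    H, _ = cv2.findHomography(corners.reshape(-1, 2), board_pts, cv2.RANSAC, 2.0)
    return H

# corners of the projected pattern found in the same image can then be mapped
# onto the board plane with cv2.perspectiveTransform(pts.reshape(-1, 1, 2), H)
```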

SAR image registration method based on SIFT and normalized mutual information

Active | CN103839265A | Tags: Shorten the time, Ensure follow-up registration accuracy, Image analysis, Feature vector, Normalized mutual information
The invention provides an SAR image registration method based on SIFT and normalized mutual information. The method includes the following steps: firstly, a standard image I1 and an image to be registered I2 are input and are respectively pre-processed; secondly, features of the pre-processed images I1 and I2 are extracted with the MM-SIFT method to acquire initial feature point pairs Fc and SIFT feature vectors Fv1 and Fv2; thirdly, initial matching is carried out using Fv1 and Fv2; fourthly, Fc is screened a second time with a RANSAC strategy based on a homography matrix model, the final correct matching point pairs Fm are acquired, and a registration parameter pr is computed with the least squares method; fifthly, I2 is spatially transformed through an affine transformation, and a roughly registered image I3 is acquired through interpolation and resampling; sixthly, pr serves as the initial value for normalized mutual information registration, I1 and I2 are finely registered with the normalized mutual information method, a final registration parameter pr1 is computed, and the registered image I4 is output. The method is fast, effective and stable, and improves SAR image registration precision and robustness.
Owner:XIDIAN UNIV
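The fine-registration criterion mentioned above, normalized mutual information, has several common definitions; the sketch below uses NMI(A, B) = (H(A) + H(B)) / H(A, B) computed from a joint histogram, which may differ from the exact variant used in the patent. The bin count is an assumption.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI of two images from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    eps = 1e-12                       # avoid log(0) on empty bins
    hx = -np.sum(px * np.log(px + eps))
    hy = -np.sum(py * np.log(py + eps))
    hxy = -np.sum(pxy * np.log(pxy + eps))
    return (hx + hy) / hxy
```

A fine-registration loop would then search over transformation parameters, starting from pr, for the warp that maximizes this value.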

Synchronous quick calibration method for multiple video cameras in a three-dimensional scanning system

A synchronous quick calibration method for multiple video cameras in a three-dimensional scanning system includes: (1) setting up a regular truncated rectangular pyramid calibration object, placing eight calibration balls at the vertices of the truncated pyramid and two reference calibration balls on the upper and lower planes respectively; (2) capturing the calibration object with the video cameras, using a two-threshold segmentation method to obtain the circles corresponding to the upper and lower planes and extracting the circle centers, obtaining three groups of correspondences between the circle center points in the image and the centers of the calibration balls in space, solving the homography matrix to obtain the intrinsic and extrinsic parameter matrices and the distortion coefficients, taking the solved camera parameters as initial values, and then using a non-linear optimization method to obtain the optimal solution for the parameters of a single video camera; (3) obtaining in sequence the extrinsic parameter matrices between the multiple video cameras and one chosen video camera, establishing an objective function from the epipolar geometric constraint of binocular stereo vision, and then using a non-linear optimization method to solve for the optimal extrinsic parameter matrix between each pair of video cameras.
Owner:NANTONG TONGYANG MECHANICAL & ELECTRICAL MFR +1
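My reading of the "two-threshold segmentation" step is that the image is binarized at two gray-level thresholds to isolate the upper- and lower-plane ball projections, after which contour centroids give the circle centers; the sketch below follows that reading, and the threshold values are scene-dependent guesses.

```python
import cv2
import numpy as np

def ball_centers(gray, thresholds=(120, 200), min_area=50.0):
    """Binarize at each threshold, then take contour centroids as circle centers."""
    centers = []
    for t in thresholds:
        _, mask = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > min_area:
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.float32(centers)
```

The resulting image points, paired with the known ball centers in space, are what feed the homography and parameter estimation described in step (2).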

Quick detection method for moving objects in a dynamic scene

Inactive | CN103325112A | Tags: Complete moving targets, Satisfy the rapidity, Image analysis, Frame difference, Gray level
Provided is a quick detection method for moving objects in a dynamic scene. The method comprises: carrying out inter-frame registration on the image sequence using CenSurE feature points and a homography transformation model; obtaining a registered version of the previous frame with the current frame as reference; subtracting the registered frame from the current frame to obtain a frame-difference image and generate a foreground mask; building a dynamic background, updated in real time, from the spatial distribution of the foreground mask in the current frame; obtaining a background-subtraction image with a background subtraction method; computing the probability density of each gray level in the frame-difference image, and when the cumulative probability density of a gray level exceeds 2Φ(k)−1, taking that gray level as the adaptive threshold; and judging pixels whose gray level is larger than the threshold as foreground pixels and the remaining pixels as background pixels. The method reaches a processing speed of 15 frames/s and obtains relatively complete moving objects while maintaining detection speed, and therefore meets the rapidity, noise immunity, illumination adaptation and target integrity requirements of moving object detection in dynamic scenes.
Owner:CIVIL AVIATION UNIV OF CHINA
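The adaptive threshold described above can be read as: pick the smallest gray level whose cumulative probability in the frame-difference histogram exceeds 2Φ(k) − 1, with Φ the standard normal CDF. A sketch under that reading follows; the confidence parameter k = 2.5 and the uint8 difference image are my assumptions.

```python
import numpy as np
from math import erf, sqrt

def adaptive_threshold(diff, k=2.5):
    """Smallest gray level whose cumulative probability exceeds 2*Phi(k) - 1.
    diff: uint8 absolute frame-difference image."""
    phi = 0.5 * (1.0 + erf(k / sqrt(2.0)))   # standard normal CDF at k
    target = 2.0 * phi - 1.0                 # ~0.988 for k = 2.5
    hist = np.bincount(diff.ravel(), minlength=256) / diff.size
    cumulative = np.cumsum(hist)
    return int(np.searchsorted(cumulative, target))

# pixels of the difference image above this value are labelled foreground,
# the rest background, as in the abstract above.
```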

Method for splicing video in real time based on multiple cameras

The invention discloses a method for splicing video in real time based on multiple cameras, which comprises the following steps: acquiring synchronized multi-path video data; preprocessing the frame images captured at the same moment; converting the color images into grayscale images; enhancing the images by expanding the dynamic range of the grayscale with a histogram equalization method; extracting the feature points of corresponding frames using the speeded-up robust features (SURF) algorithm; finding matched feature point pairs among the corresponding frame images using a nearest-neighbor matching method and a random sample consensus (RANSAC) matching algorithm; solving an optimal homography matrix over the initial k frames of the video; determining the splicing overlap regions from the matched feature point pairs; taking the homography matrix corresponding to the frame with the highest overlap-region similarity as the optimal homography matrix and splicing the subsequent video frames; and outputting the spliced video. The method reduces the computation needed to splice single video frames, improves the splicing speed of traffic monitoring videos and achieves real-time processing.
Owner:RES INST OF HIGHWAY MINIST OF TRANSPORT
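A per-frame-pair version of the homography estimation described above could look like the sketch below. The abstract specifies SURF; SIFT is substituted here only because it ships with standard OpenCV builds, and the ratio-test and RANSAC thresholds, as well as the naive overlay used instead of proper overlap-region blending, are my assumptions.

```python
import cv2
import numpy as np

def stitch_pair(frame_a, frame_b):
    """Estimate the homography mapping frame_a into frame_b's coordinates
    and paste both onto one canvas (blending omitted)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(frame_a, None)
    k2, d2 = sift.detectAndCompute(frame_b, None)

    pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in pairs if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 4.0)

    h, w = frame_b.shape[:2]
    canvas = cv2.warpPerspective(frame_a, H, (2 * w, h))
    canvas[:h, :w] = frame_b            # naive overlay of the reference frame
    return canvas, H
```

For video, the abstract keeps a homography fixed after evaluating the initial k frames, so this estimation is not repeated once a good matrix has been chosen.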

Three-dimensional surface generation method

The present invention provides a three-dimensional surface generation method that directly and efficiently generates a three-dimensional surface of an object from multiple images of the object. The method selects one of the multiple images, captured from different viewpoint positions, as a basis image and the other images as reference images, and then generates two-dimensional triangle meshes on the basis image. Next, the method defines a cost function whose terms are distances between a vector whose elements are the pixel values of a reference image deformed by the homography determined by the all-vertices depth parameter of the meshes and the camera parameters, and a vector whose elements are the pixel values of the basis image. The all-vertices depth parameter that minimizes the cost function is computed by iteratively calculating a small variation of the all-vertices depth parameter and updating its current value, using an optimization method that takes the multiple images, the camera parameters and the initial value of the all-vertices depth parameter as inputs, until a predetermined condition is satisfied.
Owner:TOKYO INST OF TECH
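One standard way to write the warp and cost sketched in this abstract uses the plane-induced homography; the formulation below is mine (with the plane convention nᵀX = d in the basis camera frame and relative pose X_ref = RX + t), and the symbols K, K_ref, R, t, n, d are not defined in the listing.

```latex
% Homography induced by a mesh facet on the plane n^T X = d, and the
% photometric cost summed over reference images j (theta = all-vertices depths):
H_j(\theta) \;=\; K_{\mathrm{ref}_j}\!\left( R_j + \frac{t_j\, n^{\top}}{d} \right) K^{-1},
\qquad
E(\theta) \;=\; \sum_{j} \big\| I_{\mathrm{ref}_j}\!\big( H_j(\theta)\,\mathbf{x} \big) - I_{\mathrm{basis}}(\mathbf{x}) \big\|^{2}.
```

Minimizing E over θ by repeatedly computing a small variation and updating the current value is the iterative loop the abstract describes.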

Scene matching/visual odometry-based inertial integrated navigation method

The invention relates to a scene matching / visual odometry-based inertial integrated navigation method. The method comprises the following steps: computing the homography matrix of the unmanned aerial vehicle's real-time aerial image sequence according to the visual odometry principle, and recursively accumulating the relative displacement between consecutive real-time frames to obtain the current position of the unmanned aerial vehicle; because visual odometry navigation accumulates error over time, introducing a FREAK feature-based scene matching algorithm for aided correction and performing high-precision positioning in an adaptation zone to effectively compensate the error accumulated during long-term visual odometry navigation, the scene matching having the advantages of high positioning precision, strong autonomy and resistance to electromagnetic interference; and establishing the error model of the inertial navigation system and a visual measurement model, performing Kalman filtering to obtain an optimal estimate, and correcting the inertial navigation system. The method effectively improves navigation precision and helps improve the autonomous flight capability of the unmanned aerial vehicle.
Owner:深圳市欧诺安科技有限公司
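A very rough sketch of the recursive visual-odometry update described above: estimate the inter-frame homography of the downward-looking camera, push the image center through it, and accumulate the displacement. The feature type (ORB), the use of the image center, and the ground-sampling-distance scaling are my assumptions; sign conventions and the scene-matching/Kalman correction are glossed over.

```python
import cv2
import numpy as np

def vo_step(prev, curr, position, metres_per_pixel):
    """Accumulate the apparent motion of the image centre between two
    consecutive aerial frames. position is the running (x, y) estimate."""
    orb = cv2.ORB_create(1500)
    k1, d1 = orb.detectAndCompute(prev, None)
    k2, d2 = orb.detectAndCompute(curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = prev.shape[:2]
    centre = np.float32([[[w / 2.0, h / 2.0]]])
    moved = cv2.perspectiveTransform(centre, H)[0, 0]
    delta = (moved - centre[0, 0]) * metres_per_pixel
    return position + delta
```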

Multi-target positioning method based on camera network

The invention discloses a multi-target positioning method based on a camera network, belonging to the technical field of multimedia sensor networks. The method comprises the following steps: firstly, at the initialization stage, camera network initialization is completed through four steps: establishment of camera units, computation of the camera homography transformations, computation of the overlapping fields of view, and computation of the camera mapping model; secondly, at the target positioning stage, target detection and tracking by each single camera are completed, and target matching is achieved by jointly using the topological relations among the cameras, geometric constraints and target feature information; finally, the physical locations of the targets are computed with the camera mapping model, realizing multi-target positioning. The method achieves stable tracking of multiple targets, has the characteristics of low cost, high positioning accuracy and stable operation, and has broad application prospects in fields such as battlefield reconnaissance, security monitoring and border protection.
Owner:THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP
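The homography part of the initialization and positioning stages amounts to mapping per-camera detections onto a shared plane; a minimal sketch, assuming the camera-to-ground homography has already been estimated (e.g. with cv2.findHomography on at least four known ground reference points), is:

```python
import cv2
import numpy as np

def to_ground_plane(foot_points, H_cam_to_ground):
    """Project pixel detections (e.g. bounding-box foot points) onto the
    common ground plane through the camera-to-ground homography."""
    pts = np.float32(foot_points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H_cam_to_ground).reshape(-1, 2)
```

Detections from different cameras expressed in the same ground coordinates can then be associated using the topological, geometric and appearance cues the abstract mentions.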

A fast monocular visual odometry navigation and positioning method combining a feature point method and a direct method

Active | CN109544636A | Tags: Accurate camera pose, Feature prediction location optimization, Image enhancement, Image analysis, Odometer, Key frame
The invention discloses a fast monocular visual odometry navigation and positioning method fusing a feature point method and a direct method, comprising the following steps: S1, starting the visual odometer, obtaining the first frame image I1, converting I1 into a grayscale image, extracting ORB feature points, and constructing an initialization key frame; S2, judging whether initialization has been carried out; if it has, going to step S6, otherwise going to step S3; S3, defining a reference frame and a current frame, extracting ORB features and matching them; S4, computing a homography matrix H and a fundamental matrix F simultaneously in parallel threads, computing a model-selection score RH, selecting the homography matrix H if RH is greater than a threshold value and the fundamental matrix F otherwise, and estimating the camera motion from the selected model; S5, obtaining the camera pose and the initial 3D points; S6, judging whether feature points have been extracted; if not, tracking with the direct method, otherwise tracking with the feature point method; S7, completing the initial camera pose estimation. The invention enables more precise navigation and positioning.
Owner:GUANGZHOU UNIVERSITY
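The H-versus-F selection in step S4 can be illustrated with the simplified sketch below: fit both models to the same matched points, score each by its RANSAC inlier count (a stand-in for the scoring actually used), and choose the homography when R_H = S_H / (S_H + S_F) exceeds a threshold. The 0.45 threshold follows the common ORB-SLAM heuristic and is an assumption, not a value taken from the patent.

```python
import cv2
import numpy as np

def select_model(pts_ref, pts_cur, rh_threshold=0.45):
    """pts_ref, pts_cur: Nx2 float32 arrays of matched points."""
    H, mask_h = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC, 3.0)
    F, mask_f = cv2.findFundamentalMat(pts_ref, pts_cur, cv2.FM_RANSAC, 3.0, 0.99)

    s_h = int(mask_h.sum()) if mask_h is not None else 0
    s_f = int(mask_f.sum()) if mask_f is not None else 0
    r_h = s_h / max(s_h + s_f, 1)

    if r_h > rh_threshold:
        return "homography", H       # near-planar scene or rotation-dominant motion
    return "fundamental", F          # general scene with enough parallax
```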

A real-time panoramic video splicing method and apparatus based on ORB features

The invention discloses a real-time panoramic video splicing method based on ORB features. The method comprises the following steps: acquisition of multi-path synchronized video data is started; the frames from each path at the same moment are preprocessed, the color images are converted into 256-level grayscale images, and the images are denoised with a Gaussian filter; the ORB feature extraction algorithm is used to extract feature points from the frames of each path at the same moment, and the ORB feature vectors of the feature points are computed; the nearest-neighbor matching method and the RANSAC (random sample consensus) matching method are used to determine the array of homography matrices between corresponding frames of the synchronized videos; frame scene splicing is carried out according to the homography matrix array; and finally the spliced video is output. The method and apparatus improve the feature extraction speed and the matching quality in the image splicing process.
Owner:CENT SOUTH UNIV

Multi-camera-based multi-target positioning and tracking method and system

The invention discloses a multi-camera-based multi-target positioning and tracking method, comprising the following steps: first, installing a plurality of cameras at different viewing angles, planning a common surveillance area for the cameras, and calibrating a plurality of height levels; sequentially carrying out foreground extraction, homography matrix calculation, foreground likelihood fusion and multi-level fusion; extracting the positioning information, obtained in the foreground likelihood fusion step, for the selected height levels; processing the positioning information of each level with a shortest-path algorithm to obtain the tracking path at each level; and, combined with the foreground extraction results, completing the multi-target three-dimensional tracking. With the disclosed method, the vanishing points of the cameras need not be computed during tracking, and a codebook model is introduced for the first time to solve the multi-target tracking problem, improving tracking accuracy; the method is stable, runs in real time and has high precision.
Owner:DALIAN NATIONALITIES UNIVERSITY
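My understanding of the foreground likelihood fusion stage is that each camera's foreground likelihood map is warped into a common plane (one homography per camera per calibrated height level) and the warped maps are combined so that only locations supported by all views survive. A minimal sketch under that reading, using multiplication as the fusion rule:

```python
import cv2
import numpy as np

def fuse_foreground(masks, homographies, out_size):
    """masks: per-camera foreground likelihood maps; homographies: per-camera
    image-to-plane homographies for one height level; out_size: (width, height)."""
    fused = np.ones(out_size[::-1], np.float32)
    for mask, H in zip(masks, homographies):
        warped = cv2.warpPerspective(mask.astype(np.float32), H, out_size)
        fused *= warped
    return fused

# repeating this for every calibrated height level and stacking the results
# gives the multi-level evidence that the tracking stage above operates on.
```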