120 results about "Epipolar geometry" patented technology

Epipolar geometry is the geometry of stereo vision. When two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. These relations are derived based on the assumption that the cameras can be approximated by the pinhole camera model.
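In normalized image coordinates the constraint takes the form x2ᵀ E x1 = 0, where E = [t]× R is the essential matrix built from the relative rotation R and translation t. A minimal NumPy sketch of this check, with a made-up two-camera setup and 3D point (all values here are illustrative, not from any patent):

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical pair of pinhole cameras with identity intrinsics:
# camera 2 is camera 1 translated by C = (1, 0, 0), no rotation.
R = np.eye(3)
C = np.array([1.0, 0.0, 0.0])
t = -R @ C                        # convention: X2 = R @ X1 + t
E = skew(t) @ R                   # essential matrix

# A 3D point in camera-1 coordinates and its two projections
# as normalized homogeneous image points.
X = np.array([0.5, -0.3, 5.0])
x1 = np.append(X[:2] / X[2], 1.0)
X2 = R @ X + t
x2 = np.append(X2[:2] / X2[2], 1.0)

residual = float(x2 @ E @ x1)     # epipolar constraint: ~0 for a true match
```

For a correct correspondence the residual vanishes; for a mismatched pair it generally does not, which is what makes the constraint useful for outlier rejection.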

Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm

Active | CN103759716A | Simplified calculation process method | Overcome deficiencies | Picture interpretation | Essential matrix | Feature point matching
The invention relates to a dynamic target position and attitude measurement method based on monocular vision at the tail end of a mechanical arm, belonging to the field of vision measurement. The method comprises the following steps: first performing camera calibration and hand-eye calibration; then shooting two pictures with the camera, extracting spatial feature points in the target areas of the pictures with a scale-invariant feature extraction method, and matching the feature points; solving the fundamental matrix between the two pictures under the epipolar geometry constraint to obtain the essential matrix, and from it the rotation and translation transformation matrices of the camera; then performing three-dimensional reconstruction and scale correction on the feature points; and finally constructing a target coordinate system from the reconstructed feature points to obtain the position and attitude of the target relative to the camera. Because the method uses monocular vision, the calculation process is simplified, and the hand-eye calibration simplifies the elimination of spurious solutions when measuring the camera's position and attitude. The method is suitable for measuring the relative positions and attitudes of stationary and low-dynamic targets.
Owner:TSINGHUA UNIV
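The step "obtain the essential matrix, and from it the rotation and translation" is the classical SVD decomposition E = U W Vᵀ yielding four (R, t) candidates, of which one matches the true motion (the spurious ones are what the hand-eye calibration helps eliminate). A sketch under an assumed known ground-truth motion, so the recovery can be checked:

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Return the four (R, t) candidates from an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:      # force proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                   # unit left null vector of E
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Hypothetical true motion: 10-degree yaw plus a unit translation.
th = np.deg2rad(10.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)

E = skew(t_true) @ R_true
candidates = decompose_essential(E)
errs = [np.linalg.norm(Rc - R_true) for Rc, _ in candidates]
best_R, best_t = candidates[int(np.argmin(errs))]
```

In practice the correct candidate among the four is selected by the cheirality test (triangulated points must lie in front of both cameras); here the known ground truth stands in for that test.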

On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data

Active | CN104574347A | Calculating geometric positioning accuracy | Evenly distributed | Image enhancement | Image analysis | Sensing data | Image resolution
The invention discloses an on-orbit satellite image geometric positioning accuracy evaluation method based on multi-source remote sensing data. The method comprises the following steps: Step 1, resampling the image to be evaluated and the reference image onto the same spheroid, datum plane and resolution; Step 2, down-sampling the two images and applying radiometric enhancement; Step 3, coarsely matching the two images with the SURF (Speeded-Up Robust Features) algorithm and removing mismatched point pairs with epipolar geometry; Step 4, compensating the geometric relationship of the image to be evaluated according to the coarse matching result, and accurately partitioning the compensated image and the reference image into blocks; Step 5, precisely matching the block pairs with the SURF algorithm and again removing mismatched point pairs with epipolar geometry; Step 6, calculating the external and internal geospatial positioning accuracy from the screened direction-control point pairs. The method enables automatic, rapid and accurate evaluation of multi-source high-accuracy remote sensing images from different sensors, spectral regions and time phases.
Owner:NANJING UNIV OF SCI & TECH
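"Removing mismatched point pairs with epipolar geometry" (Steps 3 and 5 above) typically means thresholding the distance from each matched point to its epipolar line l2 = F x1. A small sketch with a hypothetical fundamental matrix for a pure sideways translation, where epipolar lines are horizontal (all point values are made up):

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance of each pts2[i] to the epipolar line of pts1[i] in image 2."""
    d = []
    for x1, x2 in zip(pts1, pts2):
        l2 = F @ x1                            # line a*u + b*v + c = 0
        d.append(abs(x2 @ l2) / np.hypot(l2[0], l2[1]))
    return np.array(d)

# Hypothetical F for pure x-translation with identity intrinsics:
# correct matches then share the same v coordinate in both images.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
pts1 = np.array([[0.1, 0.2, 1.0], [0.3, -0.1, 1.0], [-0.2, 0.05, 1.0]])
pts2 = pts1 - np.array([0.2, 0.0, 0.0])        # correct matches
pts2[2] += np.array([0.0, 0.5, 0.0])           # inject one mismatch

inliers = epipolar_distances(F, pts1, pts2) < 1e-3
```

Pairs whose distance exceeds the threshold are discarded before the accuracy statistics are computed.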

Binocular three-dimensional visual measurement method and system fused with IMU calibration

Active | CN110296691A | Low price | Solve the defect of low precision | Image enhancement | Image analysis | Visual field loss | Light beam
The invention belongs to the field of photoelectric detection, and particularly relates to a binocular three-dimensional visual measurement method and system that fuses IMUs for calibration. The method comprises the steps of fixing one IMU to each camera, calculating the spatial transformation between each camera and its IMU, and determining the rotation matrix between the two cameras from the z-y-x Euler angles of the IMUs by the yaw-angle differential method proposed by the invention; then determining the translation vector from the epipolar geometry principle and the rotation matrix, and optimizing the cameras' internal parameters, the rotation matrix and the translation vector by sparse bundle adjustment to obtain optimized camera parameters. The method needs no large, precisely fabricated calibration plate: binocular three-dimensional visual calibration is completed merely by measuring the lengths of the two camera baselines, overcoming the defects that traditional calibration is only applicable to small indoor fields of view and that self-calibration has low accuracy. The method can be used in complicated environments such as outdoors and large fields of view, and has relatively high accuracy, robustness and flexibility.
Owner:SHANGHAI UNIV +1
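The z-y-x Euler convention mentioned above composes yaw, pitch and roll as R = Rz·Ry·Rx; the relative rotation between the two cameras then follows as R₂ᵀR₁ (the specific angle values below are made up for illustration):

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from z-y-x (yaw-pitch-roll) Euler angles, in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

# Hypothetical IMU attitudes for the two cameras; only yaw differs here.
R1 = rot_zyx(np.deg2rad(30.0), np.deg2rad(5.0), np.deg2rad(-2.0))
R2 = rot_zyx(np.deg2rad(45.0), np.deg2rad(5.0), np.deg2rad(-2.0))
R_12 = R2.T @ R1   # relative rotation taking camera-1 frame to camera-2 frame
```

This relative rotation is exactly what the epipolar constraint then combines with to recover the translation direction between the cameras.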

Omnidirectional stereo vision three-dimensional reconstruction method based on Taylor series model

The invention discloses an omnidirectional stereo vision three-dimensional reconstruction method based on the Taylor series model. The method comprises: a camera calibration step, which uses a Taylor series model to calibrate the omnidirectional vision sensor and obtain the internal parameters of the camera; an epipolar-geometry step, which calculates the essential matrix between the binocular omnidirectional cameras and extracts the rotation and translation components of the cameras; an epipolar rectification step, which rectifies the captured omnidirectional stereo images so that the rectified epipolar conics coincide with the image scan lines; and a three-dimensional reconstruction step, which matches feature points on the rectified stereo images and calculates the three-dimensional coordinates of the points from the matching results. The method is applicable to various omnidirectional vision sensors, has a wide application range and high precision, and can perform effective three-dimensional reconstruction even when the parameters of the omnidirectional vision sensors are unknown.
Owner:ZHEJIANG UNIV
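The final step, computing 3D coordinates from matched points, is commonly done by linear (DLT) triangulation: each image observation contributes two rows to a homogeneous system whose null vector is the 3D point. A sketch with a made-up calibrated two-camera setup (this is the generic perspective version, not the omnidirectional-specific variant):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two projections."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]                    # null vector = homogeneous 3D point
    return Xh[:3] / Xh[3]

# Hypothetical setup: identity intrinsics, second camera shifted 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.4, -0.2, 6.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true - np.array([1.0, 0.0, 0.0]))[:2] / X_true[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the estimate is exact; with real matches the SVD gives the least-squares solution.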

RANSAC algorithm-based visual localization method

The invention discloses a RANSAC algorithm-based visual localization method, belonging to the field of visual localization. The traditional RANSAC algorithm requires many iterations, a large amount of calculation and long computation times, so visual localization methods built on it suffer from low localization speed. The method comprises the following steps: calculating, with the SURF algorithm, the feature points and feature-point descriptors of the images uploaded by the user to be localized; selecting the pictures with the most matching points from a database, performing SURF matching between the descriptors of the uploaded images and those of the database pictures, defining each uploaded-image/database-picture pair as a matching pair, and obtaining a group of matched points for each pair; eliminating false matches in each pair with a match-quality-based RANSAC algorithm, and determining the four pairs with the most correct matches; and calculating the position coordinates of the user with an epipolar geometry algorithm from the four retained pairs, so as to complete the indoor localization.
Owner:严格集团股份有限公司
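The iteration count the abstract complains about comes from the standard RANSAC formula N = ⌈log(1−p) / log(1−wˢ)⌉, where p is the desired confidence, w the inlier ratio, and s the minimal sample size. A quick sketch (the sample size 8 assumes the eight-point fundamental-matrix estimate; the ratios are illustrative):

```python
import math

def ransac_iterations(p, w, s):
    """Iterations needed so that, with probability p, at least one random
    sample of size s is all inliers, given inlier ratio w."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** s))

# Eight-point sample (s = 8) at 99% confidence: with only 50% inliers
# the required draws exceed a thousand ...
n_half = ransac_iterations(0.99, 0.5, 8)
# ... while at 80% inliers the count collapses, which is why filtering
# obvious mismatches by match quality first speeds localization up.
n_good = ransac_iterations(0.99, 0.8, 8)
```

This exponential sensitivity to the inlier ratio is the motivation for the match-quality-based variant described above.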

360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching

The invention discloses a 360-degree three-dimensional reconstruction optimization method based on continuous-phase dense matching, which rapidly reconstructs a 360-degree three-dimensional point cloud of the measured object and nonlinearly optimizes the reconstruction result. The method comprises the following steps: first, calibrating the digital projector and the cameras and obtaining the corresponding structured-light deformation images; calculating the phase order of the deformed-fringe pixels and determining their epipolar lines in the imaging planes of the different cameras of the camera array at the same instant, thereby establishing joint epipolar-geometry and equal-phase constraints, computing dense matches between the structured-light images at different viewing angles, and generating a dense phase-matching relationship for the deformed-fringe pixels across angles; initializing the camera transformation matrices and the initial three-dimensional point cloud from the dense phase matches and the triangulation principle, then constructing and solving an objective function and its graph optimization model; and finally performing triangulated surface reconstruction on the optimized point cloud to obtain a complete 360-degree three-dimensional reconstruction model of the measured target.
Owner:10TH RES INST OF CETC

Vision-based pose stabilization control method of moving trolley

The invention discloses a vision-based pose stabilization control method for a mobile trolley that fully considers the kinematic and dynamic models of the trolley as well as the camera model. The method comprises the following steps: obtaining an initial image and an expected image at the starting pose and the expected pose through the camera, and obtaining the current image in real time during motion; using the epipolar geometric relation and the trilinear (trifocal tensor) constraint among the captured images, designing three independent, sequenced kinematic controllers based on epipolar geometry and the 1D trifocal tensor through a three-step switching control strategy; and finally, using a retrieval method, designing a dynamic switching control law that takes the outputs of the kinematic controllers as the inputs of the dynamic controllers, so that the trolley reaches the expected pose quickly and stably along the shortest path. The invention solves the problems of traditional visual servoing that the dynamics of the trolley are ignored during pose stabilization control and that the servo speed is slow; the method is practical and enables the trolley to reach the expected pose quickly and stably.
Owner:BEIJING UNIV OF CHEM TECH

Visual loopback detection method based on semantic segmentation and image restoration in dynamic scene

The invention discloses a visual loop-closure detection method based on semantic segmentation and image restoration in dynamic scenes. The method comprises the following steps: 1) pre-training an ORB feature offline dictionary on a historical image library; 2) acquiring the current RGB image as the current frame, and segmenting out the dynamic-scene area of the image with the DANet semantic segmentation network; 3) restoring the mask-covered image region with an image restoration network; 4) taking all historical database images as keyframes, and performing loop-closure detection between the current frame and each keyframe one by one; 5) judging whether a loop is formed according to the bag-of-words vector similarity and the epipolar geometry of the two frames; and 6) making the final judgement. The method can be used for loop-closure detection in visual SLAM in dynamic operating environments, solving the problems that dynamic targets such as operators, vehicles and inspection robots in a scene cause feature-matching errors, and that too few feature points remain after segmenting out the dynamic region for loops to be detected correctly.
Owner:SOUTHEAST UNIV
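The bag-of-words similarity in step 5 is typically a normalized score between the two frames' word histograms; candidates passing a threshold are then verified geometrically with the epipolar constraint. A cosine-similarity sketch (the word counts below are made-up values, and real systems usually use TF-IDF-weighted vectors):

```python
import numpy as np

def bow_similarity(v1, v2):
    """Cosine similarity between two bag-of-words histograms."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

current = [3, 0, 2, 5, 0, 1]    # hypothetical word counts, current frame
keyframe = [2, 0, 3, 4, 1, 0]   # hypothetical candidate loop-closure keyframe
score = bow_similarity(current, keyframe)
is_candidate = score > 0.8      # threshold before epipolar verification
```

Only candidates that pass both the appearance score and the epipolar check are accepted as loop closures.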

Single-point calibration object-based multi-camera calibration

The invention relates to a single-point-calibration-object-based method for calibrating the internal and external parameters of multiple cameras, and a calibration component. The calibration method comprises the following steps: capturing, with several infrared cameras fixed at different positions in the scene, the image points of a single freely moving calibration point and of an L-shaped rigid body that indicates the world coordinate system; and uploading them to an upper computer, which calibrates the internal and external parameters of the cameras from the image-point data. With this method, cameras with common viewpoints can be calibrated in pairs according to a pinhole-plus-distortion camera model and the epipolar geometric constraint between image-point pairs. The calibration tool is simple to manufacture, and the calibration object need not be confined to the common field of view of all cameras, so operability is strong; a multi-camera cascade path determined from the common-field-of-view relationship lets more image points participate in the computation, improving the robustness of the algorithm; and through multi-step optimization the calibration parameters reach sub-pixel reprojection errors, fully satisfying high-precision demands.
Owner:北京轻威科技有限责任公司
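The "sub-pixel reprojection error" criterion above measures how far each 3D calibration point, projected through the estimated camera model, lands from its observed pixel. A minimal RMS-error sketch for an undistorted pinhole model (intrinsics and points below are made-up values; the patented method additionally models lens distortion):

```python
import numpy as np

def reprojection_rmse(K, R, t, X, uv):
    """RMS reprojection error of 3D points X against observed pixels uv."""
    proj = (K @ (R @ X.T + t[:, None])).T     # project into the image
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.sqrt(np.mean(np.sum((proj - uv) ** 2, axis=1))))

# Hypothetical calibrated camera and calibration points.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 4.0], [0.5, -0.2, 5.0]])
uv = (K @ X.T).T
uv = uv[:, :2] / uv[:, 2:3]

err_exact = reprojection_rmse(K, R, t, X, uv)        # perfect fit -> 0
err_shift = reprojection_rmse(K, R, t, X, uv + 0.5)  # half-pixel offset
```

Multi-step bundle-style optimization drives this error down until it falls below one pixel for every camera.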

Video-acquisition-based Visual Map database establishing method and indoor visual positioning method using database

The invention discloses a video-acquisition-based Visual Map database establishing method and an indoor visual positioning method using the database, relates to the field of indoor positioning and navigation, and aims to solve the low accuracy and the high time and labor consumption of existing indoor visual positioning methods. A platform carrying a video acquisition device records videos while moving linearly at constant speed, on the basis of which the video-based Visual Map database is quickly established; the acquired videos are processed to record the coordinate position and image-matching information of each video frame. In the online positioning stage, the system coarsely matches the image uploaded by the user being positioned against the video-based Visual Map database using hash values computed with a perceptual hash algorithm, and completes the indoor visual positioning from the coarsely matched frames and the uploaded image using the SURF algorithm and a corresponding epipolar geometry algorithm. The method is applicable to indoor visual positioning scenarios.
Owner:哈尔滨工业大学高新技术开发总公司
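The coarse-matching stage compares compact perceptual hashes of images; one common variant is the average hash, where each image is reduced to a small grayscale patch and thresholded at its mean, and frames are ranked by the Hamming distance between hashes. A sketch on a synthetic 8x8 patch (the downscaling step is assumed done; this is the generic average-hash scheme, not necessarily the exact algorithm in the patent):

```python
import numpy as np

def average_hash(img8x8):
    """64-bit average hash of an 8x8 grayscale patch: 1 where pixel > mean."""
    return (img8x8 > img8x8.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.sum(h1 != h2))

# Deterministic synthetic "frame": an intensity ramp 0..63.
frame = np.arange(64).reshape(8, 8)
query = frame.copy()
query[0, 0] = 63                   # small perturbation of one pixel

d_same = hamming(average_hash(frame), average_hash(frame))
d_near = hamming(average_hash(frame), average_hash(query))
```

Small image changes flip only a few hash bits, so a low Hamming-distance threshold cheaply shortlists the database frames for the subsequent SURF and epipolar-geometry refinement.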