113 results for "Visual odometry" patented technology

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.
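The process the definition describes can be sketched in a few OpenCV calls: match features between consecutive frames, recover the relative rotation and translation from the essential matrix, and chain the relative motions into a trajectory. This is a minimal, generic sketch rather than any of the patented methods below; the intrinsic matrix K, the list of grayscale frames, and the helper names relative_pose and run_vo are assumptions, and a monocular camera only recovers translation up to an unknown scale.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Relative rotation R and unit-scale translation t between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # x_curr = R @ x_prev + t
    return R, t

def run_vo(frames, K):
    """Chain relative motions into camera positions (monocular scale is arbitrary)."""
    R_w, t_w = np.eye(3), np.zeros((3, 1))   # camera-to-world pose of the first frame
    positions = [t_w.copy()]
    for prev, curr in zip(frames[:-1], frames[1:]):
        R, t = relative_pose(prev, curr, K)
        R_w = R_w @ R.T                      # compose with the inverted relative motion
        t_w = t_w - R_w @ t                  # new camera position in the world frame
        positions.append(t_w.copy())
    return positions
```

Because each relative motion is estimated independently, small errors accumulate into drift over time, which is exactly the accumulated error the methods below try to control.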

Scene matching/visual odometry-based inertial integrated navigation method

The invention relates to an inertial integrated navigation method based on scene matching and visual odometry. The method comprises the following steps: computing homography matrices over the real-time aerial image sequence of an unmanned aerial vehicle (UAV) according to the visual odometry principle, and recursively accumulating the relative displacement between consecutive frames to obtain the current position of the UAV; introducing a FREAK-feature-based scene matching algorithm for aided correction, because visual odometry navigation accumulates error as it runs, and performing high-precision positioning within an adaptation zone to compensate for the error accumulated by long-running visual odometry, scene matching having the advantages of high positioning accuracy, strong autonomy, and immunity to electromagnetic interference; and establishing an error model of the inertial navigation system together with a visual measurement model, applying Kalman filtering to obtain an optimal estimate, and using it to correct the inertial navigation system. The method effectively improves navigation accuracy and helps improve the autonomous flight capability of the UAV.
Owner:深圳市欧诺安科技有限公司
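A highly simplified sketch of the fusion described in this abstract, using Python with OpenCV and NumPy: the translation part of the inter-frame homography is accumulated as a relative ground displacement, and an occasional absolute position fix stands in for the FREAK-based scene match against a reference map (not implemented here); a small linear Kalman filter fuses the two so the accumulated drift stays bounded. The class and function names, the metres_per_pixel conversion, and the noise values are illustrative assumptions, not the patent's error model, and the inertial navigation system itself is omitted.

```python
import cv2
import numpy as np

def vo_displacement(img_prev, img_curr, metres_per_pixel):
    """Relative 2-D ground displacement estimated from the inter-frame homography."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)
    # For a near-nadir camera the translation terms of H approximate the image
    # shift between the two frames (sign convention depends on the setup).
    return np.array([H[0, 2], H[1, 2]]) * metres_per_pixel

class PositionFilter:
    """2-D Kalman filter: predict with VO displacements, correct with scene-match fixes."""
    def __init__(self):
        self.x = np.zeros(2)        # estimated position (m)
        self.P = np.eye(2) * 1.0    # estimate covariance
        self.Q = np.eye(2) * 0.5    # per-step VO process noise (assumed value)
        self.R = np.eye(2) * 4.0    # scene-matching measurement noise (assumed value)

    def predict(self, d_xy):
        """Dead-reckoning step: add the VO displacement, inflate the covariance."""
        self.x = self.x + d_xy
        self.P = self.P + self.Q

    def update(self, z_xy):
        """Absolute fix from scene matching pulls the estimate back, bounding drift."""
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain (measurement H = I)
        self.x = self.x + K @ (z_xy - self.x)
        self.P = (np.eye(2) - K) @ self.P
```

Calling predict() with each new frame pair and update() whenever the scene-matching algorithm reports a fix inside an adaptation zone mirrors the correction scheme described above, with the INS error model and the full measurement model left out.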

Sparse direct method-based monocular visual odometry (VO) method for a quadrotor unmanned aerial vehicle

The invention discloses a sparse direct method-based monocular visual odometry (VO) method for a quadrotor unmanned aerial vehicle. The method is characterized by: performing depth estimation on a key frame, wherein the feature points of the key frame are determined by a feature-point method, the essential matrix between two adjacent frames is computed and decomposed to obtain the rotation matrix and translation vector between the frames (the extrinsic parameter matrix), and the depths of the feature points are then computed by triangulation; and, once the depth values of the feature points are available, solving for the pose of the quadrotor with a sparse direct method and performing motion estimation on all frames, wherein sparse feature points are extracted, the direct method is used to predict the position of each feature point in the next frame, and the grayscale values of the pixels in fixed-size patches around the feature points are used to minimize the grayscale difference between the two adjacent frames, yielding the motion pose of the camera. The method avoids cumulative errors, maintains high accuracy over long periods, and also reduces the amount of computation.
Owner:NINGBO UNIV
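The two stages of this abstract can be sketched as follows, again in Python with OpenCV, NumPy, and SciPy. This is a simplified illustration, not the patented implementation: a feature-based bootstrap recovers the rotation and translation between two keyframes from the essential matrix and triangulates feature depths, and a later frame's pose is then estimated directly by minimising the grayscale difference of small fixed-size patches around those features. The helper names, the 5x5 patch size, and the use of scipy.optimize.least_squares instead of a hand-written Gauss-Newton solver are assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares

def keyframe_depths(img1, img2, K):
    """Feature-based bootstrap: essential matrix -> R, t -> triangulated 3-D points."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return p1, (X[:3] / X[3]).T        # pixel locations and 3-D points in keyframe 1

def patch(img, u, v, half=2):
    """Bilinearly sampled (2*half+1)^2 grayscale patch centred at (u, v)."""
    du, dv = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    coords = np.vstack([(v + dv).ravel(), (u + du).ravel()])   # row, column order
    return map_coordinates(img.astype(np.float64), coords, order=1, mode='nearest')

def photometric_residual(pose6, pts3d, ref_patches, cur_img, K):
    """Grayscale difference between each reference patch and its reprojection."""
    R, _ = cv2.Rodrigues(pose6[:3].reshape(3, 1))
    t = pose6[3:].reshape(3, 1)
    proj = (K @ (R @ pts3d.T + t)).T
    uv = proj[:, :2] / proj[:, 2:3]
    return np.concatenate([ref - patch(cur_img, u, v)
                           for ref, (u, v) in zip(ref_patches, uv)])

def track_frame(ref_img, cur_img, ref_uv, pts3d, K):
    """Direct motion estimation: optimise a 6-DoF pose (Rodrigues rotation + translation)."""
    ref_patches = [patch(ref_img, u, v) for u, v in ref_uv]
    sol = least_squares(photometric_residual, np.zeros(6),
                        args=(pts3d, ref_patches, cur_img, K))
    return sol.x
```

Real direct methods use analytic Jacobians and a coarse-to-fine image pyramid for speed and convergence; the generic least-squares call here only illustrates the photometric objective built from fixed-size pixel patches.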