2149 results for "Camera image" patented technology

Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method

The invention discloses a mobile robot motion attitude estimation method based on the fusion of machine vision and inertial navigation. The method comprises the following steps: synchronously acquiring binocular camera images of the mobile robot and triaxial inertial navigation data; extracting image features from consecutive frames and matching them to estimate the motion attitude; computing the pitch and roll angles from the inertial navigation data; building a Kalman filter model to fuse the visual and inertial attitude estimates; adaptively adjusting the filter parameters according to the estimation variance; and performing dead reckoning with accumulated attitude correction. The method provides a real-time extended Kalman filter attitude estimation model in which the inertially measured gravity direction serves as a complementary reference, decouples the three-axis attitude estimation of the visual odometer, and corrects the accumulated error of the attitude estimation. The filter parameters are adjusted by fuzzy logic according to the motion state, realizing adaptive filtering, reducing the influence of acceleration noise, and effectively improving the positioning precision and robustness of the visual odometer.
Owner:ZHEJIANG UNIV
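
Below is a minimal Python sketch, not the patented implementation, of the core fusion idea in this abstract: pitch and roll derived from the accelerometer's gravity direction correct the attitude increment reported by the visual odometer through a scalar Kalman update per axis. The adaptive fuzzy tuning of the filter parameters described above is omitted, and all function names, noise values, and the sample data are illustrative assumptions.

```python
# Sketch of vision/inertial attitude fusion with a per-axis scalar Kalman filter.
# Assumptions: VO supplies a frame-to-frame angle increment; the accelerometer
# mostly measures gravity, so it gives an absolute pitch/roll reference.
import numpy as np

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (rad) from the gravity direction measured by a
    triaxial accelerometer, assuming low dynamic acceleration."""
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, az)
    return pitch, roll

class AttitudeKalman1D:
    """Scalar Kalman filter for one attitude angle: predict with the visual
    odometry increment, correct with the gravity-referenced IMU angle."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x = 0.0   # angle estimate (rad)
        self.p = 1.0   # estimate variance
        self.q = q     # process noise (visual odometry drift), illustrative value
        self.r = r     # measurement noise (accelerometer), illustrative value

    def step(self, vo_delta, imu_angle):
        # Predict: propagate with the frame-to-frame rotation from the camera.
        self.x += vo_delta
        self.p += self.q
        # Update: correct with the gravity-referenced angle from the IMU.
        k = self.p / (self.p + self.r)
        self.x += k * (imu_angle - self.x)
        self.p *= (1.0 - k)
        return self.x

if __name__ == "__main__":
    kf = AttitudeKalman1D()
    accel = (0.1, 0.05, 9.79)                 # hypothetical accelerometer sample (m/s^2)
    pitch_meas, _ = pitch_roll_from_accel(*accel)
    fused_pitch = kf.step(vo_delta=0.002, imu_angle=pitch_meas)
    print(f"fused pitch: {fused_pitch:.4f} rad")
```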

Computer enhanced surgical navigation imaging system (camera probe)

A system and method for navigation within a surgical field are presented. In exemplary embodiments according to the present invention, a micro-camera can be provided in a hand-held navigation probe tracked by a tracking system. This enables navigation within an operative scene by viewing real-time images from the viewpoint of the micro-camera within the probe, overlaid with computer-generated 3D graphics that depict structures of interest derived from pre-operative scans. Various transparency settings for the camera image and the superimposed 3D graphics can enhance depth perception, and distances between the probe tip and any of the superimposed 3D structures along a virtual ray extending from the tip can be dynamically displayed in the combined image. In exemplary embodiments of the invention, a virtual interface can be displayed adjacent to the combined image on a system display, facilitating interaction with various navigation-related functions. In exemplary embodiments according to the present invention, virtual reality systems can be used to plan surgical approaches with multi-modal CT and MRI data, allowing 3D structures to be generated and ideal surgical paths to be marked. The system and method presented thus enable the transfer of a surgical planning scenario to a real-time view of the actual surgical field, enhancing navigation.
Owner:BRACCO IMAGING SPA
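
A minimal sketch, assuming OpenCV and NumPy, of two ideas from this abstract: alpha-blending a pre-rendered 3D overlay onto the camera image with an adjustable transparency setting, and computing the distance from the probe tip to a structure along a virtual ray. The spherical stand-in for the anatomical structure and all names and coordinates are illustrative assumptions, not Bracco's implementation.

```python
# Sketch: transparency-controlled overlay of rendered graphics on a camera frame,
# plus a tip-to-structure distance along a viewing ray.
import numpy as np
import cv2

def blend_overlay(camera_frame, overlay_rgba, alpha):
    """Blend a pre-rendered RGBA overlay onto the camera image; alpha in [0, 1]
    scales the overlay opacity so depth cues can be tuned interactively."""
    rgb = overlay_rgba[..., :3].astype(np.float32)
    mask = (overlay_rgba[..., 3:4].astype(np.float32) / 255.0) * alpha
    out = camera_frame.astype(np.float32) * (1.0 - mask) + rgb * mask
    return out.astype(np.uint8)

def ray_sphere_distance(tip, direction, center, radius):
    """Distance from the probe tip to the first intersection of the viewing ray
    with a spherical structure; returns None if the ray misses."""
    d = direction / np.linalg.norm(direction)
    oc = tip - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), np.uint8)                    # placeholder camera image
    overlay = np.zeros((480, 640, 4), np.uint8)
    cv2.circle(overlay, (320, 240), 60, (0, 255, 0, 255), -1)    # stand-in rendered structure
    combined = blend_overlay(frame, overlay, alpha=0.5)
    dist = ray_sphere_distance(np.array([0.0, 0.0, 0.0]),
                               np.array([0.0, 0.0, 1.0]),
                               np.array([0.0, 0.0, 80.0]), 10.0)
    print("tip-to-structure distance along ray:", dist)
```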

Binocular vision obstacle detection method based on three-dimensional point cloud segmentation

The invention provides a binocular vision obstacle detection method based on three-dimensional point cloud segmentation. The method comprises the steps of: synchronously collecting two camera images of the same specification, calibrating and rectifying the binocular camera, and calculating a three-dimensional point cloud segmentation threshold; obtaining a three-dimensional point cloud through a stereo matching algorithm and three-dimensional reconstruction, and segmenting the reference image into image blocks; automatically detecting the road surface height from the three-dimensional point cloud and applying the segmentation threshold to separate it into a road surface point cloud, obstacle point clouds at different positions, and unknown-region point clouds; and combining the segmented point clouds with the segmented image blocks to verify the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface, and the unknown regions. With this method, the road surface height relative to the camera can still be detected in complex environments, the three-dimensional segmentation threshold is estimated automatically, the obstacle, road surface, and unknown-region point clouds are obtained by segmentation, and color image segmentation is incorporated so that color information helps verify the obstacles and the road surface and delimit their position ranges, achieving robust obstacle detection with high reliability and practicability.
Owner:GUILIN UNIV OF ELECTRONIC TECH +1
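
The sketch below illustrates, under the assumption of an already calibrated and rectified stereo pair with a known reprojection matrix Q (as produced by cv2.stereoRectify), the reconstruction-and-threshold step of this kind of pipeline: compute a disparity map, reproject it to a 3D point cloud, and label points as road surface, obstacle, or unknown with a height threshold. It is not the patented algorithm; the automatic threshold estimation and the fusion with color image segmentation are omitted, and the thresholds and parameters shown are illustrative assumptions.

```python
# Sketch: stereo reconstruction and height-threshold segmentation of the point cloud.
import numpy as np
import cv2

def segment_point_cloud(left, right, Q, road_height=0.0, height_tol=0.15):
    """left/right: rectified grayscale images; Q: 4x4 reprojection matrix.
    Returns a label map (0 = unknown, 1 = road surface, 2 = obstacle) and the
    reprojected 3D points in camera coordinates."""
    # Semi-global block matching on the rectified pair.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Reproject valid disparities to a 3D point cloud.
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0

    labels = np.zeros(disparity.shape, np.uint8)   # 0 = unknown region
    height = points[..., 1]                        # Y axis points downward in OpenCV
    road = valid & (np.abs(height - road_height) < height_tol)
    obstacle = valid & (height < road_height - height_tol)   # above the road plane
    labels[road] = 1
    labels[obstacle] = 2
    return labels, points
```

In practice the road_height value would be estimated from the point cloud itself (as the abstract describes) rather than passed in as a constant.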

Train operation fault automatic detection system and method based on binocular stereoscopic vision

The invention discloses a train operation fault automatic detection system and method based on binocular stereoscopic vision. The method comprises the steps of: collecting left and right camera images of different parts of a train with a binocular stereoscopic vision sensor; achieving synchronous, precise positioning of the various target regions where faults are liable to occur, using either the deep learning approach of a multi-layer convolutional neural network or a conventional machine learning method, combined with the left/right image consistency (fault or no-fault) constraint for the same part; carrying out preliminary fault classification and recognition in the positioned regions; achieving synchronous, precise positioning of multiple components in the non-fault regions by combining prior information on the number of components in the target regions; and matching feature points between the left and right images of the same part using binocular stereoscopic vision, performing three-dimensional reconstruction, calculating key dimensions, and quantitatively describing fine and gradually developing hidden faults such as loosening or play. The method achieves synchronous, precise detection of deformation, displacement, and falling-off faults of all major train parts, and provides a three-dimensional quantitative description of fine and gradually developing hidden faults, making detection more complete, timely, and accurate.
Owner:BEIHANG UNIV
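
A minimal sketch, assuming OpenCV with known left/right projection matrices, of the measurement step this abstract describes: match feature points between the left and right images of the same part, triangulate them, and compute a key 3D dimension whose change over time could indicate loosening or play. The detection and positioning stages are omitted, and the choice of ORB features with brute-force matching and all parameters are illustrative assumptions rather than the patented method.

```python
# Sketch: left/right feature matching, triangulation, and key-dimension measurement.
import numpy as np
import cv2

def triangulate_part(left_img, right_img, P1, P2, max_matches=100):
    """left_img/right_img: grayscale views of the same part; P1/P2: 3x4 camera
    projection matrices from calibration. Returns an N x 3 array of 3D points."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left_img, None)
    kp_r, des_r = orb.detectAndCompute(right_img, None)

    # Brute-force Hamming matching with cross-check for left/right consistency.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:max_matches]

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T  # 2 x N
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T

    # Linear triangulation; convert homogeneous output to Euclidean 3D points.
    hom = cv2.triangulatePoints(P1, P2, pts_l, pts_r)
    return (hom[:3] / hom[3]).T

def key_size(pts3d, idx_a, idx_b):
    """Euclidean distance between two reconstructed points, e.g. across a gap
    whose growth over successive inspections would indicate loosening or play."""
    return float(np.linalg.norm(pts3d[idx_a] - pts3d[idx_b]))
```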