126 results for "3D pose estimation" patented technology

3D pose estimation is the problem of determining the rigid transformation (rotation and translation) that relates a 3D object to its projection in a 2D image. One motivation for dedicated 3D pose estimation techniques arises from the limitations of feature-based pose estimation: in some environments it is difficult to extract corners or edges from an image. To circumvent these issues, some techniques treat the object as a whole through the use of free-form contours.
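
A minimal, self-contained sketch of the feature-based formulation mentioned above is shown below. It assumes known 2D-3D point correspondences and camera intrinsics and uses OpenCV's solvePnP; the point values, intrinsics, and ground-truth pose are made up for illustration.

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics (fx = fy = 800, principal point at image centre), no distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical 3D model points in the object frame (metres).
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                          [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=np.float64)

# Synthesize 2D observations by projecting with a known ground-truth pose.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [5.0]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

# Recover the pose (rotation and translation) from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix corresponding to the Rodrigues vector
print("recovered rotation:\n", R)
print("recovered translation:", tvec.ravel())
```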

Guidance method based on 3D-2D pose estimation and 3D-CT registration with application to live bronchoscopy

A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames and then estimating the new pose of the bronchoscope. In the preferred embodiment the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points between the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. Continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop. Since depth-map information is available, tracking can be done by solving a 3D-2D pose estimation problem. A 3D-2D pose estimation problem is more constrained than a 2D-2D pose estimation problem and does not suffer from the limitations associated with computing an essential matrix. The use of a correlation-based cost, instead of mutual information, as the registration cost makes it simpler to use gradient-based methods for registration.
Owner:PENN STATE RES FOUND
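
The coarse tracking step in the abstract above rests on linearizing the rotation matrix and refining the pose from 3D-2D point correspondences. The sketch below is a hedged illustration of that general idea (a small-angle Gauss-Newton update on the reprojection error), not the patented method itself; the function names, intrinsics layout, and iteration count are assumptions.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix such that skew(p) @ v == np.cross(p, v)."""
    return np.array([[0, -p[2], p[1]],
                     [p[2], 0, -p[0]],
                     [-p[1], p[0], 0]])

def refine_pose(X, uv, K, R, t, iters=10):
    """Gauss-Newton refinement of (R, t) from 3D points X (Nx3) and pixel
    observations uv (Nx2), using a small-angle (linearized) rotation update."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for _ in range(iters):
        J, r = [], []
        for Xi, ui in zip(X, uv):
            q = R @ Xi                          # rotated point, before translation
            x, y, z = q + t                     # point in the camera frame
            pred = np.array([fx * x / z + cx, fy * y / z + cy])
            r.append(ui - pred)                 # reprojection residual
            # d(projection)/d(camera-frame point)
            Jproj = np.array([[fx / z, 0, -fx * x / z**2],
                              [0, fy / z, -fy * y / z**2]])
            # d(camera-frame point)/d(delta_rotation, delta_translation)
            Jpose = np.hstack([-skew(q), np.eye(3)])
            J.append(Jproj @ Jpose)
        J, r = np.vstack(J), np.concatenate(r)
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        # Linearized rotation update, re-orthonormalized via SVD.
        R = (np.eye(3) + skew(delta[:3])) @ R
        U, _, Vt = np.linalg.svd(R)
        R = U @ Vt
        t = t + delta[3:]
    return R, t
```

Given a reasonable initialization (for example, the pose from the previous frame), a few iterations are typically enough; in the setting of the abstract, the depth map from the CT-based virtual rendering supplies the 3D points.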

Indoor localization system for mobile robot and calculation method thereof

The invention discloses an indoor localization system for a mobile robot. The system, which improves localization accuracy, is designed on the basis of UWB (ultra-wideband) technology combined with information from a motor encoder. The localization calculation method provided by the invention comprises five steps: 1, confirming an origin of coordinates, starting the base station and the indoor mobile robot, and taking the pose of the indoor mobile robot measured at start-up as the initial pose; 2, obtaining pose estimate 1 of the mobile robot at time t from the motor encoder; 3, receiving, through a tag, the high-frequency electromagnetic pulses transmitted by the base station and computing the position of the mobile robot; 4, augmenting the position obtained in step 3 with the heading angle measured by an electronic compass to obtain pose estimate 2 at time t; 5, fusing pose estimate 1 and pose estimate 2 to obtain the pose estimation at time t, saving the pose estimation, and returning to step 2.
Owner:UNIV OF SHANGHAI FOR SCI & TECH
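
The abstract does not spell out the fusion rule used in step 5, so the sketch below illustrates one common, assumed choice: an inverse-variance (Kalman-style) weighted average of the encoder-derived and UWB-plus-compass pose estimates, with the heading fused on the circle to avoid angle-wrap errors. The function name, variances, and example values are hypothetical.

```python
import numpy as np

def fuse_poses(pose_enc, var_enc, pose_uwb, var_uwb):
    """Fuse two pose estimates (x, y, heading) by inverse-variance weighting.

    pose_* : array-like of (x, y, theta) with theta in radians
    var_*  : array-like of per-component variances (assumed diagonal, uncorrelated)
    """
    pose_enc, pose_uwb = np.asarray(pose_enc, float), np.asarray(pose_uwb, float)
    w_enc = 1.0 / np.asarray(var_enc, float)
    w_uwb = 1.0 / np.asarray(var_uwb, float)

    # Positions: straightforward weighted average.
    xy = (w_enc[:2] * pose_enc[:2] + w_uwb[:2] * pose_uwb[:2]) / (w_enc[:2] + w_uwb[:2])

    # Heading: average weighted unit vectors so that e.g. 359 deg and 1 deg fuse to ~0 deg.
    vec = w_enc[2] * np.exp(1j * pose_enc[2]) + w_uwb[2] * np.exp(1j * pose_uwb[2])
    theta = np.angle(vec)

    return np.array([xy[0], xy[1], theta])

# Example: odometry has drifted slightly; UWB+compass is noisier in position.
print(fuse_poses([1.02, 2.05, 0.10], [0.01, 0.01, 0.02],
                 [0.95, 2.20, 0.12], [0.04, 0.04, 0.01]))
```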

A method for estimating the posture of a three-dimensional human body based on a video stream

A method for estimating the posture of a three-dimensional human body based on a video stream. The deep-learning-based method estimates the 3D pose of a human body in a video stream, which avoids many defects caused by two-dimensional vision analysis errors and makes full use of the temporal relationship between video frames to improve the accuracy and real-time performance of 3D pose inference results for the video stream. For the n-th (n >= 2) frame of the video, the method includes: (1) inputting the two-dimensional image of the current frame and using a shallow neural network module to generate a shallow image feature map; (2) inputting the two-dimensional joint heat map of the human body generated for the (n-1)-th frame and the shallow image feature map generated for the current frame into an LSTM module to generate a deep-level feature map; (3) feeding the deep-level feature map generated for the current frame into a residual module to generate the two-dimensional human body joint heat map of the current frame; (4) feeding the two-dimensional human body joint heat map of the current frame into a three-dimensional joint inference module for two-dimensional-to-three-dimensional spatial mapping; and (5) superimposing the three-dimensional human body joint heat maps generated for each frame to produce a video stream of three-dimensional human body posture estimation.
Owner:QINGDAO RES INST OF BEIHANG UNIV
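
A rough PyTorch sketch of the per-frame pipeline described above (shallow features, LSTM fusion with the previous frame's joint heat maps, current heat maps, then 2D-to-3D lifting) is given below. The class name, layer sizes, plain LSTM cell, and the single linear head standing in for the residual module are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseFromVideo(nn.Module):
    """Sketch: per-frame 2D heat-map prediction with temporal state, then 2D->3D lifting."""

    def __init__(self, n_joints=17, feat_ch=32, grid=64, hidden=256):
        super().__init__()
        self.n_joints, self.grid = n_joints, grid
        # Step 1: shallow CNN over the current RGB frame.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU())
        # Step 2: fuse shallow features with the previous frame's heat maps, feed an LSTM cell.
        self.fuse = nn.Conv2d(feat_ch + n_joints, feat_ch, 1)
        self.lstm = nn.LSTMCell(feat_ch * 8 * 8, hidden)
        # Step 3: predict current 2D joint heat maps (residual module collapsed to a linear head).
        self.to_heatmap = nn.Linear(hidden, n_joints * grid * grid)
        # Step 4: lift 2D joint coordinates to 3D.
        self.lift3d = nn.Sequential(nn.Linear(n_joints * 2, 256), nn.ReLU(),
                                    nn.Linear(256, n_joints * 3))

    def forward(self, frame, prev_heatmaps, state=None):
        feat = self.shallow(frame)                                 # (B, C, H/4, W/4)
        prev = F.interpolate(prev_heatmaps, size=feat.shape[-2:])  # align resolutions
        fused = self.fuse(torch.cat([feat, prev], dim=1))
        x = F.adaptive_avg_pool2d(fused, 8).flatten(1)
        h, c = self.lstm(x, state)
        heatmaps = self.to_heatmap(h).view(-1, self.n_joints, self.grid, self.grid)
        # 2D joint coordinates from the heat-map argmax (a soft-argmax would keep this differentiable).
        idx = heatmaps.flatten(2).argmax(-1)
        xy = torch.stack([idx % self.grid, idx // self.grid], dim=-1).float()
        joints3d = self.lift3d(xy.flatten(1)).view(-1, self.n_joints, 3)
        return heatmaps, joints3d, (h, c)

# Per-frame loop over a video: the first frame's "previous" heat maps start at zero.
model = PoseFromVideo()
prev = torch.zeros(1, 17, 64, 64)
state = None
for frame in torch.rand(5, 1, 3, 256, 256):                        # dummy 5-frame clip
    prev, joints3d, state = model(frame, prev, state)
```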