39 results about "Accurate pose estimation" patented technology

Improved closed-loop detection algorithm-based mobile robot visual SLAM (Simultaneous Localization and Mapping) method

The present invention provides a mobile robot visual SLAM (Simultaneous Localization and Mapping) method based on an improved closed-loop detection algorithm. The method includes the following steps: S1, the Kinect is calibrated using Zhang Zhengyou's calibration method; S2, ORB feature extraction is performed on the acquired RGB images, and feature matching is performed using FLANN (Fast Library for Approximate Nearest Neighbors); S3, mismatches are deleted, the space coordinates of the matching points are obtained, and the inter-frame pose transformation (R, t) is estimated with the PnP algorithm; S4, structureless iterative optimization is performed on the pose transformation solved by PnP; S5, the image frames are preprocessed, the images are described using a bag of visual words, image matching is performed with an improved similarity-score matching method to obtain closed-loop candidates, and correct closed loops are selected; and S6, a graph optimization method centered on bundle adjustment is used to optimize the poses and landmarks, with more accurate camera poses and landmarks obtained through continuous iterative optimization. With the method of the invention, more accurate pose estimates and better three-dimensional reconstruction in indoor environments can be obtained.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
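The abstract above estimates the inter-frame transformation (R, t) with PnP. Since an RGB-D sensor provides depth for both frames, a closely related step can be sketched as SVD-based rigid alignment of matched 3-D points (the Kabsch/Umeyama solution) — a minimal numpy illustration under that assumption, not the patented method itself:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """SVD-based (Kabsch/Umeyama) estimate of (R, t) such that Q ~ R @ P + t.

    P, Q: (N, 3) arrays of corresponding 3-D points from two frames.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)              # centroids
    H = (P - cp).T @ (Q - cq)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so the result is a rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 30-degree rotation about z plus a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.default_rng(0).normal(size=(50, 3))
Q = P @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(P, Q)
```

In a real pipeline the correspondences would come from the ORB/FLANN matching of step S2, with mismatches removed before the solve as in step S3.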

A method and system for realizing a visual SLAM semantic mapping function based on a dilated (atrous) convolutional deep neural network

The invention relates to a method for realizing a visual SLAM semantic mapping function based on a dilated (atrous) convolutional deep neural network. The method comprises the following steps: (1) using an embedded development processor to obtain the color and depth information of the current environment via an RGB-D camera; (2) obtaining feature point matching pairs from the collected images, carrying out pose estimation, and obtaining scene space point cloud data; (3) carrying out pixel-level semantic segmentation on the image using deep learning, and giving the spatial points semantic annotation information through the mapping between the image coordinate system and the world coordinate system; (4) optimizing the semantic segmentation by eliminating its errors through manifold clustering; and (5) performing semantic mapping, splicing the spatial point clouds to obtain a point cloud semantic map composed of dense discrete points. The invention also relates to a system for realizing the visual SLAM semantic mapping function based on the dilated convolutional deep neural network. With the method and system, the spatial map carries higher-level semantic information and better meets the use requirements of real-time mapping.
Owner:EAST CHINA UNIV OF SCI & TECH
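Step (3) above attaches per-pixel class labels to spatial points via the image-to-world mapping. A minimal sketch of that idea is pinhole back-projection of labelled depth pixels; the toy intrinsics below are illustrative values, not from the patent:

```python
import numpy as np

def backproject_with_labels(depth, labels, K):
    """Map each valid-depth pixel (u, v) to a labelled 3-D point.

    depth:  (H, W) depth map (metres); zeros mark invalid pixels
    labels: (H, W) per-pixel semantic class ids (from the segmentation net)
    K:      3x3 camera intrinsic matrix
    Returns an (N, 4) array of rows (x, y, z, class_id).
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.nonzero(depth > 0)          # pixel coordinates with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z, labels[v, u]])

# Toy example: identity intrinsics, two valid pixels.
K = np.eye(3)
depth = np.array([[2.0, 0.0], [0.0, 4.0]])
labels = np.array([[1, 0], [0, 3]])
pts = backproject_with_labels(depth, labels, K)
```

Transforming the resulting camera-frame points by the estimated camera pose then places the labelled points in the world frame for map splicing.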

Object posture estimation/correction system using weight information

An object pose estimating and matching system is disclosed for estimating and matching the pose of an object highly accurately, by establishing suitable weighting coefficients, against images of the object captured under different pose and illumination conditions. A pose candidate determining unit determines pose candidates for the object. A comparative image generating unit generates comparative images close to an input image depending on the pose candidates, based on reference three-dimensional object models. A weighting coefficient converting unit determines a coordinate correspondence between the standard three-dimensional weighting coefficients and the reference three-dimensional object models, using the standard and reference three-dimensional basic points, and converts the standard three-dimensional weighting coefficients into two-dimensional weighting coefficients depending on the pose candidates. A weighted matching and pose selecting unit calculates weighted distance values or similarity degrees between the input image and the comparative images, using the two-dimensional weighting coefficients, and selects the comparative image whose distance value to the object is smallest or whose similarity degree with respect to the object is greatest, thereby estimating and matching the pose of the object.
Owner:NEC CORP
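The weighted matching step can be sketched as a per-pixel weighted squared distance between the input image and each candidate's comparative image, keeping the closest candidate — a simplified sketch of the idea, with made-up arrays standing in for real images and weights:

```python
import numpy as np

def select_pose(input_img, comparative_imgs, weights):
    """Pick the pose candidate whose comparative image is closest to the
    input image under a per-pixel weighted squared distance.

    input_img:        (H, W) input image
    comparative_imgs: list of (H, W) images, one per pose candidate
    weights:          (H, W) two-dimensional weighting coefficients
    Returns (index_of_best_candidate, list_of_distances).
    """
    d = [float(np.sum(weights * (input_img - c) ** 2)) for c in comparative_imgs]
    return int(np.argmin(d)), d

inp = np.array([[1.0, 2.0], [3.0, 4.0]])
cands = [inp + 0.5, inp.copy(), inp - 1.0]
w = np.array([[1.0, 1.0], [1.0, 0.0]])   # down-weight an unreliable pixel
best, dists = select_pose(inp, cands, w)
```

Zeroing a weight, as in the bottom-right pixel here, is what lets unreliable regions (shadows, occlusions) drop out of the comparison.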

Position and posture estimation method for a driverless car based on point-to-plane distance and correntropy (cross-correlation entropy) registration

The invention discloses a position and posture estimation method for a driverless car based on point-to-plane distance and correntropy (cross-correlation entropy) registration, which comprises the following steps: first, a three-dimensional laser radar is calibrated, and the acquired three-dimensional laser radar data is subjected to coordinate conversion; point cloud registration is then carried out between the acquired data and the existing map data to obtain the rotation and translation transformation of a rigid body; the position and posture of the moving body are obtained from this rotation and translation transformation. According to the invention, using the three-dimensional laser radar as the data source, the position and posture of the driverless car are estimated through coordinate system conversion, data down-sampling, point set registration and other steps. The method can well overcome the influence of weather, light and other environmental factors. Moreover, the error evaluation function based on point-to-plane distance and correntropy is robust to noise and outliers, such as mismatches between the scene and parts of the map description, dynamic obstacles and the like; therefore, accurate and robust estimation of the position and posture of the driverless car can be achieved.
Owner:XI AN JIAOTONG UNIV
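The core of the error evaluation above is a point-to-plane residual robustified by a correntropy (Gaussian-kernel) weight. A minimal sketch of those two quantities, assuming correspondences and destination-surface normals are already given:

```python
import numpy as np

def correntropy_point_to_plane(src, dst, normals, sigma=0.1):
    """Point-to-plane residuals and correntropy weights per correspondence.

    r_i = n_i . (p_i - q_i)            signed distance to the tangent plane
    w_i = exp(-r_i^2 / (2 sigma^2))    Gaussian-kernel correntropy weight

    Large residuals (dynamic obstacles, mismatches) get weights near zero,
    so they barely influence the subsequent weighted pose solve.
    """
    r = np.einsum('ij,ij->i', normals, src - dst)
    w = np.exp(-r ** 2 / (2.0 * sigma ** 2))
    return r, w

# One near-inlier (1 cm off the plane) and one gross outlier (1 m off).
src = np.array([[0.0, 0.0, 1.01], [0.0, 0.0, 2.0]])
dst = np.array([[0.0, 0.0, 1.00], [0.0, 0.0, 1.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
r, w = correntropy_point_to_plane(src, dst, normals, sigma=0.1)
```

In a full registration loop these weights would scale each correspondence's contribution to the linearized (R, t) update at every iteration; the kernel width `sigma` is a tuning parameter, not a value from the patent.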

Object six-degree-of-freedom pose estimation method based on color and depth information fusion

The invention relates to an object six-degree-of-freedom pose estimation method based on color and depth information fusion. The method comprises the following steps: acquiring a color image and a depth image of a target object, and carrying out instance segmentation on the color image; cutting a color image block containing the target object from the color image, and acquiring the target object point cloud from the depth image; extracting color features from the color image block, and combining the color features with the target object point cloud at the pixel level; carrying out point cloud processing on the target object point cloud to obtain a plurality of point cloud local region features fusing the color and depth information, as well as a global feature, and combining the global feature into the point cloud local region features; and predicting a pose and confidence of the target object from each local feature, taking the pose corresponding to the highest confidence as the final estimation result. Compared with the prior art, color and depth information are combined and the object pose is predicted from both local and global features, so the method has the advantages of high robustness, high accuracy and the like.
Owner:TONGJI UNIV
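The feature-combination steps above (pixel-level fusion of color and geometry features, then appending a global descriptor to every local feature) can be sketched with plain arrays; the feature dimensions and max-pooling choice here are illustrative assumptions, not values from the patent:

```python
import numpy as np

def fuse_features(point_feats, color_feats):
    """Fuse per-point geometry and color features, then append a global
    descriptor (max-pool over all points) to every local feature.

    point_feats: (N, Dg) geometry features per point
    color_feats: (N, Dc) color features sampled at the same pixels
    Returns an (N, 2*(Dg + Dc)) array of fused local+global features.
    """
    local = np.concatenate([point_feats, color_feats], axis=1)   # pixel-level fusion
    global_desc = local.max(axis=0)                              # one global descriptor
    tiled = np.tile(global_desc, (local.shape[0], 1))            # copy to every point
    return np.concatenate([local, tiled], axis=1)

point_feats = np.arange(6.0).reshape(3, 2)   # 3 points, 2 geometry dims
color_feats = np.ones((3, 3))                # 3 points, 3 color dims
fused = fuse_features(point_feats, color_feats)
```

Each fused row then feeds a per-region pose-and-confidence predictor, with the highest-confidence pose kept as the final estimate.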

Stacked object 6D pose estimation method and device based on deep learning

The invention discloses a stacked-object 6D pose estimation method and device based on deep learning. The method comprises the steps of: inputting a point cloud of scene depth information into a point cloud deep learning network, and extracting point cloud features; learning, through a multi-layer perceptron, the semantic information of the object to which the point cloud features belong, the foreground and background of the scene, and the 3D translation information of that object, and performing regression to obtain seed points; randomly sampling K points among the seed points, K being greater than the number of objects to be estimated, and clustering the seed points with the K points as centers; predicting the 3D translation, 3D rotation and 6D pose confidence of the object from the features of each cluster through a multi-layer perceptron; and, according to the predicted 6D poses and their confidences, using non-maximum suppression (NMS) to obtain the final scene object poses. The method realizes accurate end-to-end pose estimation for stacked scenes: the input is the scene point cloud, the pose of each object in the scene is directly output, and the occlusion problem of stacked objects is well handled.
Owner:SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV
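The final NMS step above can be sketched as greedy suppression over pose hypotheses: keep predictions in descending confidence order and drop any whose translation lies within a radius of an already-kept pose. The radius and the translation-only distance are simplifying assumptions for illustration:

```python
def pose_nms(poses, radius=0.05):
    """Greedy non-maximum suppression over predicted 6D pose hypotheses.

    poses:  list of (translation_xyz_tuple, confidence) pairs
    radius: two hypotheses closer than this (in translation) are assumed
            to be the same object; the more confident one is kept.
    Returns the kept (translation, confidence) pairs.
    """
    kept = []
    for t, conf in sorted(poses, key=lambda p: -p[1]):   # high confidence first
        if all(sum((a - b) ** 2 for a, b in zip(t, k)) > radius ** 2
               for k, _ in kept):
            kept.append((t, conf))
    return kept

preds = [((0.00, 0.00, 0.00), 0.90),
         ((0.01, 0.00, 0.00), 0.60),   # near-duplicate of the first object
         ((0.50, 0.50, 0.00), 0.80)]
kept = pose_nms(preds, radius=0.05)
```

A fuller variant would also compare rotations (e.g. geodesic distance on SO(3)) before declaring two hypotheses duplicates.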

Unmanned aerial vehicle attitude measurement method based on strapdown inertial navigation and Beidou satellite navigation system

The invention discloses an unmanned aerial vehicle attitude measurement method based on strapdown inertial navigation and the Beidou satellite navigation system, which addresses the problem that the attitude error of existing unmanned aerial vehicle attitude measurement grows continuously as time accumulates. The method comprises the following steps: measuring the attitude of the unmanned aerial vehicle with three sensors in its strapdown inertial navigation system, which respectively measure the angular velocity, the specific force and the magnetic field intensity of the vehicle; processing the angular velocity data of the three-axis fiber-optic gyroscope with a quaternion Runge-Kutta method, and performing a preliminary calculation to obtain estimates of three attitude angles; and obtaining another attitude parameter, the yaw angle, from the magnetic field values measured in three directions. The attitude information measured by the three-axis fiber-optic gyroscope is preliminarily corrected with a Kalman filter, and complementary filtering compensation is applied to the inertial devices by introducing differenced velocity information measured by the Beidou satellite, so that the unmanned aerial vehicle can still obtain accurate attitude estimates without GPS navigation information.
Owner:NAT UNIV OF DEFENSE TECH
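The quaternion Runge-Kutta step mentioned above integrates body-frame gyro rates into the attitude quaternion via dq/dt = ½ q ⊗ (0, ω). A minimal RK4 sketch of that kinematic equation (one possible realization, not the patented filter chain):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions in (w, x, y, z) convention."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rk4_quat_step(q, omega, dt):
    """One 4th-order Runge-Kutta step of dq/dt = 0.5 * q (x) (0, omega)."""
    f = lambda qq: 0.5 * quat_mul(qq, np.array([0.0, *omega]))
    k1 = f(q)
    k2 = f(q + 0.5 * dt * k1)
    k3 = f(q + 0.5 * dt * k2)
    k4 = f(q + dt * k3)
    q = q + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return q / np.linalg.norm(q)   # renormalise to fight drift

# Spin at 90 deg/s about z for 1 s; the integrated yaw should be 90 deg.
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.pi / 2.0])
for _ in range(100):
    q = rk4_quat_step(q, omega, 0.01)
yaw = np.degrees(np.arctan2(2.0 * (q[0]*q[3] + q[1]*q[2]),
                            1.0 - 2.0 * (q[2]**2 + q[3]**2)))
```

In the full method these integrated angles are only a preliminary estimate, subsequently corrected by the Kalman and complementary filters using magnetometer and Beidou velocity measurements.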

A monocular visual odometry method adopting deep learning and mixed pose estimation

Pending · CN111899280A · Benefits: accurate pose estimation results; fixed an issue where it only worked when the camera was moving slowly · Topics: image enhancement, image analysis, pattern recognition, triangulation
The invention discloses a monocular visual odometry method adopting deep learning and mixed pose estimation. The method comprises the following steps: estimating the optical flow field between consecutive images with a deep learning neural network, and extracting key-point matching pairs from the optical flow field; taking the key-point matching pairs as input and, according to the 2D-2D pose estimation principle, preliminarily calculating the rotation matrix and translation vector with an epipolar geometry method. The monocular image depth field is estimated with a deep neural network and, combined with geometric triangulation, serves as a reference value for computing the absolute scale with the RANSAC algorithm, converting the pose from a normalized coordinate system to the real coordinate system; and when the 2D-2D pose estimation or the absolute scale estimation fails, pose estimation is performed with the PnP algorithm according to the 3D-2D pose estimation principle. According to the invention, accurate pose estimation and absolute scale estimation can be obtained, robustness is good, and the camera trajectory can be well reproduced in different scene environments.
Owner:HARBIN ENG UNIV
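The triangulation used for scale recovery above can be sketched with the standard linear (DLT) two-view method: each pixel observation contributes two rows to a homogeneous system whose null space is the 3-D point. The camera matrices below are illustrative, not from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices
    x1, x2: (u, v) pixel observations of the same 3-D point
    Returns the 3-D point in the world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)            # null space of A
    X = Vt[-1]
    return X[:3] / X[3]                    # dehomogenise

# Two unit-focal cameras one unit apart along x, observing one point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

Comparing such triangulated depths against the network's predicted depth field, inside a RANSAC loop, is one way the absolute scale factor can be voted on.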

Real-time attitude estimation motion analysis method and system, computer equipment and storage medium

The invention discloses a real-time attitude estimation motion analysis method and system, computer equipment and a storage medium. The method comprises the following steps: acquiring a real-time video of a user; inputting the video frames into a trained dual-branch deep network for feature extraction to obtain a joint-point heat map of the human body and the affinity regions between joint points; performing non-maximum suppression on the multiple peaks of the joint-point heat map to select a series of candidate joint points, connecting the candidate joint points with one another to form a bipartite graph, and optimizing the bipartite graph; according to the optimized bipartite graph, performing distortion correction on the joint pixel points between adjacent video frames while the user moves in real time, calculating limb angle information, and obtaining limb movement data; and, after a consulting instruction sent by the user is received, performing motion analysis on the limb motion data and outputting the motion analysis result. According to the invention, the spatial features of the image are retained more effectively, and unnecessary influence is eliminated when finding the optimal connections between the established joint points.
Owner:FOSHAN UNIVERSITY
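The candidate-joint selection above, non-maximum suppression on heat map peaks, can be sketched as keeping pixels that exceed a threshold and are the maximum of their 3x3 neighbourhood; the threshold and window size are illustrative choices:

```python
import numpy as np

def heatmap_peaks(hm, thresh=0.3):
    """Candidate joint detection via non-maximum suppression.

    Keeps pixels that exceed `thresh` and equal the maximum of their
    3x3 neighbourhood. Returns a list of (row, col, score) tuples.
    """
    H, W = hm.shape
    padded = np.pad(hm, 1, constant_values=-np.inf)
    # 3x3 max filter built from the nine shifted views of the padded map.
    neigh = np.max([padded[dy:dy + H, dx:dx + W]
                    for dy in range(3) for dx in range(3)], axis=0)
    ys, xs = np.nonzero((hm >= neigh) & (hm > thresh))
    return list(zip(ys.tolist(), xs.tolist(), hm[ys, xs].tolist()))

hm = np.zeros((5, 5))
hm[1, 1] = 0.9        # a strong peak
hm[1, 2] = 0.5        # shoulder of the same peak; should be suppressed
hm[3, 3] = 0.8        # a second, separate peak
peaks = heatmap_peaks(hm)
```

The surviving peaks then become the nodes of the bipartite graph, scored against the affinity regions to find the optimal limb connections.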

Unmanned aerial vehicle tunnel defect detection method and system

The invention relates to an unmanned aerial vehicle tunnel defect detection method and system, in which the unmanned aerial vehicle carries an LED module, a camera, a laser radar, an ultrasonic range finder and an IMU. The method comprises the following steps: collecting images in the tunnel based on the LED module and the camera to obtain a training image set; training a defect detection model with the training image set; and collecting real-time tunnel images, performing suspected-defect detection on them with the defect detection model, obtaining unmanned aerial vehicle pose information based on the camera, the laser radar, the ultrasonic range finder and the IMU, and controlling the unmanned aerial vehicle to hover. Compared with the prior art, the LED module supplements illumination in the tunnel; the IMU, camera, laser radar and ultrasonic range finder are fused to achieve unmanned aerial vehicle pose estimation; and the trained defect detection model detects suspected defects in real time, with the vehicle hovering after a suspected defect is found so that defect detection can be carried out further. Accurate pose estimation and defect detection can be realized in a tunnel that has no GPS signal and is highly symmetric inside.
Owner:TONGJI UNIV
