
2547 results for "Transformation matrix" patented technology

In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping ℝⁿ to ℝᵐ and 𝐱 is a column vector with n entries, then T(𝐱) = A𝐱 for some m×n matrix A, called the transformation matrix of T. Note that A has m rows and n columns, whereas the transformation T is from ℝⁿ to ℝᵐ. There are alternative expressions of transformation matrices, involving row vectors, that are preferred by some authors.
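As a minimal illustration of this definition (a NumPy sketch, not part of the listing), a linear map T: ℝ³ → ℝ² is applied by multiplying its 2×3 transformation matrix with a column vector; the last line shows the row-vector convention mentioned above.

    import numpy as np

    # T: R^3 -> R^2 is represented by a 2x3 matrix A (m = 2 rows, n = 3
    # columns), and T(x) = A @ x for any column vector x with 3 entries.
    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 3.0, 1.0]])

    x = np.array([1.0, 2.0, 3.0])   # vector in R^3
    y = A @ x                       # its image in R^2
    print(y)                        # [7. 9.]

    # Row-vector convention preferred by some authors: y = x @ A.T
    assert np.allclose(x @ A.T, y)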

Stereoscopic image aligning apparatus, stereoscopic image aligning method, and program of the same

A stereoscopic image aligning apparatus (200) automatically aligns image pairs for stereoscopic viewing in less time than conventional apparatuses, and is applicable to image pairs captured by a single-sensor camera or a variable-baseline camera, without relying on camera parameters. The stereoscopic image aligning apparatus (200) includes: an image pair obtaining unit (205) that obtains an image pair including a left-eye image and a corresponding right-eye image; a corresponding point detecting unit (252) that detects a corresponding point, i.e., a set of a first point in a first image (one image of the pair) and a second point in a second image (the other image of the pair) that corresponds to the first point; a first matrix computing unit (254) that computes a homography transformation matrix for transforming the first point such that the vertical parallax between the first and second points is minimized and an epipolar constraint is satisfied; a transforming unit (260) that transforms the first image using the homography transformation matrix; and an output unit (210) that outputs a third image, which is the transformed first image, together with the second image.
Owner:PANASONIC CORP
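A rough sketch of the homography-based alignment step this entry describes, using generic OpenCV feature matching and RANSAC (the feature detector, match count, and threshold are illustrative assumptions; the patented unit additionally minimizes vertical parallax under an epipolar constraint, which this sketch does not do):

    import cv2
    import numpy as np

    def align_stereo_pair(left, right):
        """Warp the left image toward the right image with a homography
        estimated from detected corresponding points."""
        g1 = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(g1, None)
        kp2, des2 = orb.detectAndCompute(g2, None)

        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        matches = sorted(matches, key=lambda m: m.distance)[:200]
        src = np.float32([kp1[m.queryIdx].pt for m in matches])
        dst = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC discards mismatched corresponding points
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = right.shape[:2]
        third = cv2.warpPerspective(left, H, (w, h))  # the "third image"
        return third, H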

Visual ranging-based simultaneous localization and map construction method

The invention provides a visual ranging-based simultaneous localization and map construction method. The method includes the following steps: a binocular image is acquired and corrected to obtain a distortion-free binocular image; feature extraction is performed on the distortion-free binocular image to generate feature point descriptors; feature point matching relations between the two images are established; the horizontal parallax (disparity) of the matched feature points is obtained from the matching relations, and the real spatial depth is calculated from the parameters of the binocular image capture system; the feature points of the current frame are matched against feature points in a world map; wrongly matched feature points are removed to obtain successfully matched feature points; a transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference coordinate system is calculated, and an estimate of the pose change of the binocular image capture system relative to its initial position is obtained from the transformation matrix; and the world map is established and updated. The method has low computational complexity, centimeter-level positioning accuracy, and unbiased position estimation.
Owner:BEIJING CHAOXING FUTURE TECH CO LTD
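Two core computations in this pipeline can be sketched as follows, assuming a rectified stereo rig and 3×N matched point sets (the Kabsch/SVD solver is a standard stand-in for the transformation-matrix step, not the patent's exact algorithm):

    import numpy as np

    def depth_from_disparity(disparity, focal_px, baseline_m):
        """Depth of a rectified stereo match: Z = f * B / d."""
        return focal_px * baseline_m / np.maximum(disparity, 1e-6)

    def rigid_transform(P, Q):
        """Least-squares R, t with Q ~ R @ P + t for 3xN matched point
        sets (Kabsch/SVD method)."""
        cp = P.mean(axis=1, keepdims=True)
        cq = Q.mean(axis=1, keepdims=True)
        U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflection
        R = U @ D @ Vt
        t = cq - R @ cp
        return R, t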

Systematic calibration method of welding robot guided by line structured light vision sensor

The invention relates to a systematic calibration method of a welding robot guided by a line structured light vision sensor, which comprises the following steps: first, controlling the mechanical arm to change pose, capturing an image of a circular target with the camera, matching the circular target image to world coordinates, and then obtaining the camera's intrinsic parameter matrix and extrinsic parameter matrix RT; second, solving the line equation of the laser stripe by Hough transformation, and using the extrinsic matrix RT obtained in the first step to derive the plane equation of the laser stripe's plane in the camera coordinate system; third, computing the transformation matrix between the mechanical arm's end coordinate system and its base coordinate system using a quaternion method; and fourth, calculating the coordinates of the end point of the welding workpiece in the mechanical arm's coordinate system, and then calculating the offset of the workpiece by combining this with the pose of the mechanical arm. The method is flexible, simple, and fast, with high precision and generality, good stability and timeliness, and a small amount of calculation.
Owner:JIANGNAN UNIV +1
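The third and fourth steps amount to chaining homogeneous transforms; a minimal NumPy sketch (with hypothetical calibration inputs, not the patent's notation) of mapping a camera-frame point into the robot base frame:

    import numpy as np

    def to_homogeneous(R, t):
        """Pack a 3x3 rotation and a translation into a 4x4 transform."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.ravel(t)
        return T

    def point_in_base_frame(p_cam, T_base_end, T_end_cam):
        """Map a camera-frame point into the robot base frame by chaining
        the (hypothetical) calibration results:
          T_base_end - end-effector pose in the base frame (kinematics)
          T_end_cam  - camera pose in the end-effector frame (hand-eye)"""
        p = np.append(p_cam, 1.0)                # homogeneous coordinates
        return (T_base_end @ T_end_cam @ p)[:3]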

Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm

Active patent CN103759716A. Tags: simplified calculation process; overcomes deficiencies; picture interpretation; essential matrix; feature point matching.
The invention relates to a dynamic target position and attitude measurement method based on monocular vision at the tail end of a mechanical arm, and belongs to the field of vision measurement. The method comprises the following steps: first, calibrating the video camera and performing hand-eye calibration; then shooting two pictures with the video camera, extracting spatial feature points in the target areas of the pictures using a scale-invariant feature extraction method, and matching the feature points; solving the fundamental matrix between the two pictures using an epipolar geometry constraint method to obtain the essential matrix, and further solving the rotation transformation matrix and the displacement transformation matrix of the video camera; then performing three-dimensional reconstruction and scale correction on the feature points; and finally constructing a target coordinate system from the reconstructed feature points to obtain the position and attitude of the target relative to the video camera. The method uses monocular vision, simplifies the calculation process, and, through hand-eye calibration, simplifies the elimination of erroneous solutions when measuring the camera's position and attitude. The method is suitable for measuring the relative positions and attitudes of stationary and low-dynamic targets.
Owner:TSINGHUA UNIV
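A compact sketch of the fundamental-to-essential-matrix step using standard OpenCV calls (an illustration of the general technique, not the patent's exact procedure; recoverPose's cheirality check is one common way to eliminate the spurious decompositions):

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K):
        """Rotation R and unit-scale translation t between two views from
        matched pixel coordinates (Nx2 arrays); K is the intrinsic matrix."""
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        E = K.T @ F @ K                  # essential from fundamental matrix
        # recoverPose keeps the one (R, t) decomposition of E that places
        # the triangulated points in front of both cameras
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t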

Spatial non-cooperative target pose estimation method based on model and point cloud global matching

The invention discloses a spatial non-cooperative target pose estimation method based on global matching between a model and a point cloud. The method comprises the following steps: a target scene point cloud is acquired with a depth camera and, after filtering, serves as the data point cloud to be registered, and a three-dimensional distance transformation is applied to the target model point cloud; a disambiguating principal-direction transformation is applied to the initial data point cloud and the target model point cloud, a translation domain is determined, and a search and registration over the translation and rotation domains is performed with a global ICP algorithm to obtain the initial transformation matrix from the model coordinate system to the camera coordinate system, i.e., the initial pose of the target; the pose transformation matrix of the previous frame is applied to the data point cloud of the current frame, which is then registered against the model with the ICP algorithm to obtain the pose of the current frame; and the rotation angle and translation amount are calculated from the pose transformation matrix. The method has good noise resistance and can output the target pose in real time; geometric features such as normals and curvatures of the data point cloud need not be calculated, the registration is fast, and the precision is high.
Owner:NANJING UNIV OF SCI & TECH
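A hedged sketch of the frame-to-frame refinement stage using Open3D's stock point-to-point ICP (the patent's global ICP search over the translation and rotation domains is not reproduced here; the distance threshold is an illustrative assumption):

    import numpy as np
    import open3d as o3d

    def refine_pose(data_cloud, model_cloud, T_init, max_dist=0.02):
        """Register the model to the current frame's data cloud, starting
        from the previous frame's pose T_init (4x4)."""
        result = o3d.pipelines.registration.registration_icp(
            model_cloud, data_cloud, max_dist, T_init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        T = result.transformation
        R, t = T[:3, :3], T[:3, 3]
        # rotation angle from trace(R); translation amount from |t|
        angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
        return T, angle, np.linalg.norm(t)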

Vehicle-mounted SINS/GPS integrated navigation system performance enhancement method

The invention discloses a method for enhancing the performance of a vehicle-mounted SINS/GPS integrated navigation system. The invention relates to the technical field of navigation and addresses the low precision and reliability that prior vehicle-mounted SINS/GPS integrated navigation systems suffer when the GPS temporarily fails. The method comprises the following steps: first, judging whether the GPS is valid; if the GPS is valid, estimating and correcting the SINS error by Kalman filtering, using the difference between the position and velocity provided by the GPS and those of the SINS as the observation; if the GPS is invalid, judging whether the vehicle is stationary; if stationary, correcting the SINS error with zero-velocity updates aiding the SINS; if moving, computing the coordinate transformation matrix from the navigation coordinate system to the vehicle body coordinate system from the SINS attitude angles, converting the velocity in the navigation frame into the body frame with this matrix, and building a vehicle-motion-constraint measurement equation from the velocity constraints; simplifying the equation according to the vehicle's motion; and combining the SINS velocity with the vehicle motion constraint so that the constraint aids and corrects the SINS. The method improves the precision and reliability of the vehicle-mounted SINS/GPS integrated navigation system.
Owner:HARBIN INST OF TECH
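The navigation-to-body velocity conversion and the motion constraint can be sketched as follows (a NumPy sketch assuming a ZYX Euler convention and a forward/lateral/vertical body-axis layout, neither of which the abstract specifies):

    import numpy as np

    def nav_to_body_dcm(roll, pitch, yaw):
        """Coordinate transformation matrix from the navigation frame to
        the vehicle body frame, built from the SINS attitude angles."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
        Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
        Rz = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
        return Rx @ Ry @ Rz

    v_nav = np.array([5.0, 0.3, 0.1])                # SINS velocity, nav frame
    v_body = nav_to_body_dcm(0.0, 0.02, 1.2) @ v_nav
    # A non-slipping wheeled vehicle has (near-)zero lateral and vertical
    # body-frame velocity, so v_body[1] and v_body[2] can serve as
    # pseudo-measurements in the constraint measurement equation.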

Three-dimensional image processing method, device, storage medium and computer equipment

The invention provides a three-dimensional image processing method comprising the following steps: obtaining the current image acquisition device's position coordinates in a world coordinate system, obtaining a first direction vector and a second direction vector, and using a view transformation algorithm to obtain a view matrix; obtaining preset near-plane vertex coordinates, a near-plane distance, and a far-plane distance in the image acquisition device's coordinate system, and using a projection transformation algorithm to obtain a projection matrix; multiplying the view matrix by the projection matrix to obtain a transformation matrix; multiplying the transformation matrix by the initial texture vertex coordinates associated with the added augmented reality elements to obtain the target texture vertex coordinates for the augmented reality elements; and rendering the augmented reality elements with the target texture vertex coordinates to form a three-dimensional image. When the mobile terminal rotates, it drives the image acquisition device to rotate, so the three-dimensional image corresponding to the augmented reality elements rotates correspondingly, which improves the fusion between the three-dimensional image and the real background image and increases the image's realism. The invention also provides a three-dimensional image processing device, a storage medium, and computer equipment.
Owner:TENCENT TECH (SHENZHEN) CO LTD
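A minimal NumPy sketch of the view-times-projection pipeline this entry describes (the look-at and perspective constructions are standard graphics formulas; all numbers are illustrative, not from the patent):

    import numpy as np

    def look_at(eye, target, up):
        """View matrix from the device position and two direction vectors."""
        f = target - eye
        f = f / np.linalg.norm(f)
        s = np.cross(f, up)
        s = s / np.linalg.norm(s)
        u = np.cross(s, f)
        V = np.eye(4)
        V[:3, :3] = np.stack([s, u, -f])
        V[:3, 3] = -V[:3, :3] @ eye
        return V

    def perspective(fov_y, aspect, near, far):
        """Projection matrix from field of view and near/far plane distances."""
        t = 1.0 / np.tan(fov_y / 2.0)
        P = np.zeros((4, 4))
        P[0, 0], P[1, 1] = t / aspect, t
        P[2, 2] = (far + near) / (near - far)
        P[2, 3] = 2.0 * far * near / (near - far)
        P[3, 2] = -1.0
        return P

    V = look_at(np.array([0.0, 0.0, 3.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
    P = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)
    M = P @ V                                  # combined transformation matrix
    vertex = np.array([0.5, 0.5, 0.0, 1.0])    # initial vertex (homogeneous)
    clip = M @ vertex
    ndc = clip[:3] / clip[3]                   # target coordinates after the divide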