
39 results for "Global transformation" patented technology

Method and device for jointly calibrating robot and three-dimensional sensing component

The invention relates to a method and device for jointly calibrating a robot and a three-dimensional sensing component, together with a computer device and a storage medium. The method comprises: acquiring target reference point information corresponding to the robot in different attitudes; calibrating the parameters of the three-dimensional sensing component; determining a first coordinate system transformation relation that matches the three-dimensional depth information under different robot attitudes; determining a second coordinate system transformation relation of the spatial attitudes of the robot under different rotation angles; and, according to these transformation relations and the acquired robot attitude and rotation angle information, calculating a global transformation matrix for global calibration and optimization. By calibrating the sensor parameters and obtaining the transformation relations between the multiple coordinate systems, the method performs multi-view three-dimensional reconstruction of an object within a limited field of view, obtains the field-of-view information at different robot rotation angles, calculates the global transformation matrix, and performs global calibration, thereby fully fusing the multi-view depth data and improving matching accuracy.
Owner:SHENZHEN ESUN DISPLAY +1
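
As a rough illustration only (not the patent's actual procedure), the sketch below shows the kind of transform chaining such a joint calibration implies: a hypothetical sensor-to-flange matrix and per-attitude robot pose matrices are composed into a global transformation that maps each view's depth data into one common base frame for fusion. All names and matrices are assumptions.

```python
import numpy as np

def to_homogeneous(points):
    """Append a 1 to each 3-D point so 4x4 transforms can be applied."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def fuse_views(base_T_flange_per_pose, flange_T_sensor, clouds):
    """Map every per-view depth cloud into the robot base frame.

    base_T_flange_per_pose : list of 4x4 robot poses (one per attitude)
    flange_T_sensor        : 4x4 hand-eye style calibration result (assumed known)
    clouds                 : list of (N_i, 3) depth point clouds, one per view
    """
    fused = []
    for base_T_flange, cloud in zip(base_T_flange_per_pose, clouds):
        # Global transformation for this view: sensor frame -> robot base frame.
        global_T = base_T_flange @ flange_T_sensor
        fused.append((to_homogeneous(cloud) @ global_T.T)[:, :3])
    return np.vstack(fused)  # multi-view depth data expressed in a single frame
```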

Vehicle-mounted navigator, vehicle state dynamic image display method and system thereof as well as storage medium

The invention discloses a vehicle state dynamic image display method applied to a vehicle-mounted navigator. The vehicle state dynamic image display method comprises the following steps: obtaining 3Dmodel data, wherein the 3D model data comprises base data used for rendering 3D objects and dynamic image data corresponding to different nodes, and each dynamic image data comprises a key frame dataarray; obtaining state information of a target node of the vehicle; determining a global transformation matrix of the target node based on a key frame interpolation calculation function and dynamic data corresponding to the target node according to the state information; rendering by using the determined global transformation matrix and the 3D model data to determine image data of the target node;and controlling a display device to display the image data. The vehicle state dynamic image display method is capable of helping the user to see the vehicle state in all directions and in the whole process and obtaining smoother and more continuous dynamic images. The invention further discloses a vehicle-mounted navigator and a vehicle state dynamic image display system thereof as well as a storage medium, which have the corresponding effects.
Owner:SHENZHEN ROADROVER TECH
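
A minimal sketch of what keyframe interpolation into a per-node global transformation matrix could look like, assuming a hypothetical key frame data array that maps a node state (e.g. a door opening fraction) to a rotation and translation; the actual interpolation function and data layout of the patent are not specified here.

```python
import numpy as np

# Hypothetical key frame data array for one node: (state, z-rotation in rad, translation xyz).
KEYFRAMES = [
    (0.0, 0.0,       (0.0, 0.0, 0.0)),
    (1.0, np.pi / 3, (0.1, 0.0, 0.0)),
]

def node_global_transform(state):
    """Interpolate between the two bracketing keyframes and build a 4x4 matrix."""
    (s0, a0, t0), (s1, a1, t1) = KEYFRAMES
    w = np.clip((state - s0) / (s1 - s0), 0.0, 1.0)
    angle = (1 - w) * a0 + w * a1
    tx, ty, tz = (1 - w) * np.array(t0) + w * np.array(t1)
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0, tx],
                     [s,  c, 0.0, ty],
                     [0.0, 0.0, 1.0, tz],
                     [0.0, 0.0, 0.0, 1.0]])  # handed to the renderer with the 3D model data
```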

Computer vision based non-contact type data transmission method

The invention discloses a computer-vision-based non-contact data transmission method, which comprises: step 1, coding and displaying an image sequence, namely 101, generating a standard check image sequence, 102, building a standard data image sequence, and 103, displaying the image sequence; and step 2, decoding the image sequence, namely 201, obtaining an actual check image sequence, 202, extracting the actual mark points of the actual check image sequence and calculating a global transformation homography matrix, 203, extracting the vertex coordinates of each grid of the actual check image sequence, 204, resolving the affine transformation parameters of each grid and determining the binary information carried by each grid, 205, obtaining an actual data image sequence, and 206, decoding the actual data image sequence. Because a camera lens is used to acquire the computer image information, one-way transmission of information between the computer intranet and extranet is realized, and computer information can be transferred efficiently between a classified network and a non-classified network.
Owner:XIAN UNIV OF SCI & TECH
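
To illustrate step 202, a global homography between detected mark points and their standard positions can be estimated with standard tools; the sketch below uses OpenCV and is only an assumed stand-in for the patent's own computation (marker extraction and grid decoding are omitted).

```python
import cv2
import numpy as np

def global_homography(marker_pts_actual, marker_pts_standard):
    """Estimate the global transformation homography from detected mark points.

    Both arguments are (N, 2) arrays of corresponding pixel coordinates, N >= 4,
    assumed already extracted from the actual and standard check image sequences.
    """
    H, _ = cv2.findHomography(marker_pts_actual.astype(np.float32),
                              marker_pts_standard.astype(np.float32),
                              cv2.RANSAC)
    return H

def rectify_frame(frame, H, out_size):
    """Warp a captured frame back into the standard image plane before the
    per-grid affine parameters and binary payload are decoded."""
    return cv2.warpPerspective(frame, H, out_size)
```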

Three-dimensional image splicing method for eliminating motion ghosting

The invention discloses a three-dimensional image stitching method for eliminating motion ghosting, which comprises the following steps: acquiring two groups of images with a binocular camera and calculating the disparity of each group; extracting feature points from each group of images, describing and matching the feature points, and screening out wrong matches to obtain an accurate set of feature point pairs; setting a new feature constraint condition according to the disparity and the feature point pair set, and globally transforming the second group of images with the homography that optimizes the feature constraint condition; determining the overlapping region of the first group of images and the transformed second group of images, locating the moving object in the overlapping region, and designing weighted fusion coefficients according to the relative position of the moving object and the virtual stitching seam; and fusing and stitching the left and right views respectively, then synthesizing the stitched left and right views into the final stereo image. The stereo image stitching method provided by the invention realizes high-quality stitching of stereo images containing moving objects.
Owner:SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV
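
A hedged sketch of the generic stitching backbone this abstract builds on: ORB feature matching, a RANSAC homography used as the global transformation of the second image, and a weighted fusion in the overlap. The patent's disparity-based constraint and seam-aware weighting are not reproduced; the weight here is a placeholder.

```python
import cv2
import numpy as np

def match_and_warp(img1, img2):
    """Match ORB features, fit a RANSAC homography, and globally transform img2
    into img1's frame (the disparity-based feature constraint is omitted)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))

def blend_overlap(a, b, weight):
    """Weighted fusion in the overlap; in the actual method the weight would be
    shaped by the moving object's position relative to the virtual seam."""
    return (weight * a.astype(np.float32) + (1 - weight) * b.astype(np.float32)).astype(np.uint8)
```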

Multi-view point cloud registration method based on K-means clustering center local curved surface projection

Active · CN113610903A · Effects: reduces point cloud resolution loss, improves multi-view registration accuracy · Classifications: image enhancement, image analysis · Keywords: point cloud, global transformation
The invention discloses a multi-view point cloud registration method based on local surface projection at K-means clustering centers. The method includes: giving an initial global transformation matrix; calculating a multi-scale feature descriptor and a normal vector for each frame of point cloud; determining the cluster to which each point of the complete point cloud belongs; using the multi-scale feature descriptors and normal vectors to obtain the registration correspondence of each original point relative to the complete point cloud; carrying out bidirectional interpolation projection on the local MLS surface, eliminating a point pair if it does not satisfy the rigid-body transformation consistency constraint, so as to obtain the final matching point set of the single-frame point cloud, and taking a projection point and its corresponding point as a correct corresponding pair if the constraint is satisfied; registering the N view point clouds in sequence; and performing global optimization. The invention alleviates the reduced point cloud resolution and low registration precision caused by down-sampling an already sparse laser point cloud, i.e., it addresses the sampling sparsity of the three-dimensional laser point cloud.
Owner:HARBIN INST OF TECH
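
A simplified, assumed sketch of the clustering-and-correspondence stage: K-means assigns cluster membership, tentative correspondences are taken from the assigned cluster, and pairs far from the typical distance are rejected as a crude stand-in for the patent's rigid-body consistency check (the local MLS surface projection is omitted).

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_correspondences(frame_cloud, full_cloud, n_clusters=64, reject_ratio=2.5):
    """Return filtered (frame point, full-cloud point) correspondence pairs."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(full_cloud)
    labels = km.predict(frame_cloud)              # cluster attribution of each frame point
    pairs, dists = [], []
    for i, p in enumerate(frame_cloud):
        # Nearest full-cloud point restricted to the assigned cluster.
        members = full_cloud[km.labels_ == labels[i]]
        j = np.argmin(np.linalg.norm(members - p, axis=1))
        pairs.append((p, members[j]))
        dists.append(np.linalg.norm(members[j] - p))
    dists = np.array(dists)
    keep = dists < reject_ratio * np.median(dists)  # crude consistency rejection
    return [pair for pair, ok in zip(pairs, keep) if ok]
```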

Unmanned aerial vehicle video stabilization method and apparatus in low-altitude flight scene

The invention provides an unmanned aerial vehicle video stabilization method and apparatus for low-altitude flight scenes. The method comprises the following steps: obtaining a to-be-stabilized unmanned aerial vehicle video; extracting feature points from each frame of the video; connecting the feature points to obtain feature point tracks; dividing the tracks into long tracks and short tracks according to a threshold; calculating a global transformation matrix for each frame based on the long tracks and the smoothed long tracks; obtaining smoothed short tracks by combining the per-frame global transformation matrices, the short tracks and a low-pass filter; and finally applying a multi-plane optimization method over the long and short tracks to obtain a stabilized unmanned aerial vehicle video. By classifying the feature point tracks, the method stabilizes areas with insufficient feature points while preserving the stabilization quality of areas with sufficient feature points, so that the unstable edges produced by the multi-plane optimization method in low-altitude flight scenes are alleviated to a certain extent and the stabilization of the unmanned aerial vehicle video is improved.
Owner:BEIHANG UNIV
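
For orientation only, the sketch below shows two generic building blocks such a pipeline relies on: a per-frame global transform estimated from tracked points, and a moving-average low-pass filter applied to the resulting trajectory. The track classification and multi-plane optimization of the patent are not reproduced, and the data layout is assumed.

```python
import cv2
import numpy as np

def per_frame_global_transforms(prev_pts_list, curr_pts_list):
    """Estimate one 2x3 similarity transform per frame pair from tracked
    feature points (stand-ins for the method's long tracks)."""
    transforms = []
    for prev_pts, curr_pts in zip(prev_pts_list, curr_pts_list):
        m, _ = cv2.estimateAffinePartial2D(prev_pts, curr_pts)
        transforms.append(m if m is not None else np.eye(2, 3))
    return transforms

def low_pass(trajectory, radius=15):
    """Moving-average low-pass filter over an (N, 3) per-frame trajectory (dx, dy, da)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode='edge')
    return np.stack([np.convolve(padded[:, i], kernel, mode='same')[radius:-radius]
                     for i in range(trajectory.shape[1])], axis=1)
```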

Non-rigid registration method and system for maximum moment and space consistency of multimode image

Active · CN114693755A · Effects: accurate non-rigid registration, resolves distortion · Classifications: image enhancement, image analysis · Keywords: morphing, data set
The invention discloses a non-rigid registration method and system based on maximum moment and spatial consistency for multimodal images. The method comprises the steps of: constructing a global transformation sub-network and a deformation attention sub-network, and combining them with a position transformation grid and a pixel resampling layer to build an end-to-end trainable multimodal image non-rigid registration network; constructing a loss function for the network; and constructing a training data set from multimodal images and training the network with the constructed data set and loss function. The method can directly register distorted images without prior geometric correction, handles the local distortion of multimodal images well, and achieves accurate registration, providing reliable support for accurate image fusion and target detection. It can be applied in fields such as natural disaster monitoring, resource investigation and exploration, and precise target strike, and has broad application prospects. The system likewise serves intelligent manufacturing, rescue and relief work, remote sensing monitoring and similar applications, giving it a wide application range.
Owner:HUNAN UNIV
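
A toy sketch, under assumptions, of how a global-transformation sub-network plus a position grid and a resampling layer can be wired up in a spatial-transformer style: a small CNN regresses an affine matrix, which drives grid generation and pixel resampling. The deformation-attention branch, loss function and training loop of the patent are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalTransformSubnet(nn.Module):
    """Regress a 2x3 affine matrix from the concatenated fixed/moving images,
    then resample the moving image on the predicted position grid."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, 6)
        # Initialize to the identity transform.
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, fixed, moving):
        theta = self.fc(self.features(torch.cat([fixed, moving], dim=1)).flatten(1))
        theta = theta.view(-1, 2, 3)
        grid = F.affine_grid(theta, moving.size(), align_corners=False)  # position transformation grid
        return F.grid_sample(moving, grid, align_corners=False)          # pixel resampling layer
```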

Method and system for global registration between 3D scans

The invention provides a computer-implemented method and a computerized device for global registration between a first point cloud and a second point cloud obtained by a scanning device in two independent scans of the same spatial scene. The method includes extracting a first set of discriminative line pairs from the first point cloud and a second set of discriminative line pairs from the second point cloud, wherein the discriminative line pairs have a higher discriminative power than randomly selected line pairs. In some embodiments, a plurality of matching line pair groups between the two sets of discriminative line pairs are then determined according to a threshold criterion related to the relationships, geometry and locations of the lines, and a compass angle criterion related to the compass error of the scanning device. The method further comprises finding the most reliable correspondences between the two point clouds through voting, then calculating a global transformation matrix, and finally aligning the two point clouds with that global transformation matrix. Embodiments of the present invention provide accurate and effective registration, particularly for building construction applications.
Owner:HONG KONG APPLIED SCI & TECH RES INST
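
Once voting has produced reliable point correspondences, the global transformation matrix can be obtained in closed form; the sketch below is a standard Kabsch/SVD estimate and alignment step, given here only as an assumed illustration of that final stage (line-pair extraction, matching and voting are not shown).

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form 4x4 global transformation that best maps src onto dst
    (src, dst: (N, 3) corresponding points, e.g. selected by voting)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, c_dst - R @ c_src
    return T

def align(cloud, T):
    """Apply the global transformation matrix to the second point cloud."""
    return cloud @ T[:3, :3].T + T[:3, 3]
```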