
1706 results for "Feature matching" patented technology

Improved method of RGB-D-based SLAM algorithm

Disclosed in the invention is an improved method for an RGB-D-based simultaneous localization and mapping (SLAM) algorithm. The method comprises two parts: a front-end part and a back-end part. The front-end part performs feature detection and descriptor extraction, feature matching, motion transformation estimation, and motion transformation optimization. The back-end part is as follows: a pose graph initialized with the 6-D motion transformation relations obtained by the front end is used for carrying out closed-loop detection to add closed-loop constraint conditions; a non-linear error function optimization method is used for carrying out pose graph optimization to obtain a globally optimal camera pose and camera motion track; and three-dimensional environment reconstruction is carried out. According to the invention, feature detection and descriptor extraction are carried out by using the ORB method, and feature points with invalid depth information are filtered out; bidirectional feature matching is carried out by using a FLANN-based KNN method, and the matching result is refined by using a homography matrix transformation; precise inlier matching point pairs are obtained by using an improved RANSAC motion transformation estimation method; and the speed and precision of point cloud registration are improved by using a GICP-based motion transformation optimization method.
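The bidirectional KNN matching step described above can be sketched in plain NumPy as a brute-force stand-in for FLANN (the descriptor arrays and the 0.75 ratio threshold are illustrative assumptions, not values from the patent):

```python
import numpy as np

def knn_match(desc_a, desc_b, k=2):
    """For each descriptor in desc_a, return the indices and distances of its
    k nearest neighbours in desc_b by Euclidean distance (brute force)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k], np.sort(d, axis=1)[:, :k]

def bidirectional_ratio_match(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test in both directions; keep mutually consistent pairs."""
    idx_ab, dist_ab = knn_match(desc_a, desc_b)
    idx_ba, dist_ba = knn_match(desc_b, desc_a)
    matches = []
    for i, (nbrs, dists) in enumerate(zip(idx_ab, dist_ab)):
        if dists[0] < ratio * dists[1]:            # ratio test, A -> B
            j = nbrs[0]
            if idx_ba[j][0] == i and dist_ba[j][0] < ratio * dist_ba[j][1]:
                matches.append((i, j))             # cross-check, B -> A
    return matches
```

A real pipeline would feed ORB descriptors (binary, compared with Hamming distance) into FLANN's LSH index instead of this dense Euclidean search; the cross-check logic is the same.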

Unmanned aerial vehicle aerial photography sequence image-based slope three-dimensional reconstruction method

Inactive · CN105184863A · Reduce in quantity · Reduce texture discontinuities · 3D modelling · Visual technology · Structure from motion
The invention relates to an unmanned aerial vehicle (UAV) aerial photography sequence image-based slope three-dimensional reconstruction method. The method includes the following steps: feature region matching and feature point pair extraction are performed on un-calibrated UAV multi-view aerial photography sequence images by adopting a feature matching-based algorithm; the geometric structure of the slope and the motion parameters of the camera are calculated from the unordered matched feature points by adopting bundle-adjustment structure from motion, so that a sparse slope three-dimensional point cloud model is obtained; the sparse point cloud model is processed by adopting a patch-based multi-view stereo vision algorithm, so that it is diffused into a dense slope three-dimensional point cloud model; and the surface mesh of the slope is reconstructed by adopting the Poisson reconstruction algorithm, and the texture information of the slope surface is mapped onto the mesh model, so that a vivid, high-resolution three-dimensional slope model is constructed. The method has the advantages of low cost, flexibility, portability, high imaging resolution, a short operating period, suitability for surveying high-risk areas, and the like. With the method adopted, the application of low-altitude photogrammetry and computer vision technology in the field of geological engineering disaster prevention and reduction can be greatly promoted.
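The structure-from-motion stage recovers sparse 3-D points from features matched across views. As a minimal illustration of that idea, a single point can be triangulated from two calibrated views by the standard linear (DLT) method; the projection matrices in the test are hypothetical, not from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coordinates (u, v).
    Solves A X = 0 for the homogeneous point X via SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null-space vector of A
    return X[:3] / X[3]           # dehomogenize
```

Bundle adjustment then jointly refines all such points and the camera parameters by minimizing reprojection error; this sketch covers only the per-point initialization.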

Target object recognition and positioning method based on color images and depth images

Active · CN106826815A · Improve the efficiency of finding the target object · Effective reflection of the characteristics · Programme-controlled manipulator · Scene recognition · Color image · Color recognition
The invention relates to a target object recognition and positioning method based on color images and depth images. The method comprises the following steps: (1) a target region is confirmed by a robot through remote HSV color recognition, the distance between the robot and the target region is obtained from the RGB color images and the depth images, and the robot conducts navigation and path planning and moves to the vicinity of the target region; (2) when the robot reaches the vicinity of the target region, the RGB feature information of the target object is obtained through SURF feature point detection, feature matching is conducted between this information and the pre-stored RGB feature information of the target object, and if the features of the target object accord with an existing object model, the target object is positioned; and (3) the RGB color images are collected on an imaging plane, the two-dimensional coordinates of the target object in the imaging plane are obtained, and the relative distance between the target object and the camera is obtained from the depth images, so that the three-dimensional coordinates of the target object are obtained. With this method, the category of an object can be judged quickly and its three-dimensional coordinates determined quickly.
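Step (3) amounts to back-projecting a pixel and its depth through the pinhole camera model. A minimal sketch (the intrinsic parameters fx, fy, cx, cy are illustrative values typical of a Kinect-class RGB-D camera, not from the patent):

```python
def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth Z into camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

The depth image supplies Z at the pixel found in the RGB image, which is why the two modalities must be registered to each other before this back-projection is valid.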

Fingerprint minutiae matching method fused with global information and system thereof

The invention provides a fingerprint minutiae matching method fused with global information and a system thereof. The system realizes the entire matching process with an image acquiring unit, an image pre-processing unit, a feature extracting unit, a template storing unit and a feature matching unit, and specifically comprises the following steps: in the feature extracting unit, extracting a feature that includes global information, namely the minutia handedness, and taking the minutia handedness, the minutia information, and the minutia local direction description together as the feature representing the fingerprint; measuring the similarity between minutiae by the minutia handedness and the minutia local direction description; selecting several pairs of minutiae having the greatest similarity as initial point pairs; registering the fingerprint features and obtaining the corresponding matching scores with each group of initial point pairs as a reference; selecting the maximum of these matching scores as the final matching score; and judging, based on the final matching score, whether the input fingerprint feature and the template fingerprint feature are from the same finger, thereby finishing the minutiae matching of the fingerprint.
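The similarity measurement and initial-pair selection can be sketched roughly as follows; the dictionary fields `handedness` and `local_dirs` are hypothetical names standing in for the minutia handedness and local direction description, and the cosine score is a generic choice, not the patent's exact measure:

```python
import math

def minutia_similarity(m1, m2):
    """Similarity between two minutiae: the handedness (global information)
    must agree; the local direction descriptors are then compared by
    cosine similarity."""
    if m1["handedness"] != m2["handedness"]:
        return 0.0
    d1, d2 = m1["local_dirs"], m2["local_dirs"]
    dot = sum(a * b for a, b in zip(d1, d2))
    norm = math.sqrt(sum(a * a for a in d1)) * math.sqrt(sum(b * b for b in d2))
    return dot / norm if norm else 0.0

def initial_pairs(input_mins, template_mins, n_pairs=3):
    """Select the n most similar (input, template) minutia pairs as
    candidate references for registration."""
    scored = [(minutia_similarity(a, b), i, j)
              for i, a in enumerate(input_mins)
              for j, b in enumerate(template_mins)]
    scored.sort(reverse=True)
    return [(i, j) for _, i, j in scored[:n_pairs]]
```

Each selected pair would then anchor one trial alignment of the two minutia sets, and the best-scoring alignment decides the match.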

Somatosensory-based natural interaction method for virtual mine

The invention discloses a somatosensory-based natural interaction method for a virtual mine. The method comprises the steps of: applying a Kinect to acquire gesture signals, depth information and skeleton point information of a user; carrying out smoothing filtering on the images, depth information and skeleton information of the gesture signals; segmenting the gesture images by using a depth histogram, applying an eight-neighborhood contour tracking algorithm to find the gesture contour, and recognizing static gestures; performing feature-matching recognition of dynamic gestures by applying an improved dynamic time warping algorithm to the skeleton information; triggering corresponding Win32 instruction information with the gesture recognition result, transmitting the information to a virtual reality engine, and mapping the instruction information respectively to the native keyboard and mouse operations of the virtual mining natural interaction system, so as to realize somatosensory interaction control of the virtual mine. According to the method, the efficiency of natural human-machine interaction can be improved, the immersion and sense of naturalness presented by the virtual mine can be enhanced, and the application of virtual reality and somatosensory interaction technology can be effectively popularized in coal mines and other fields.
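The dynamic-gesture step relies on dynamic time warping (DTW), which aligns two gesture sequences of different lengths before comparing them. A textbook (un-improved) DTW distance between two frame-feature sequences can be written as:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two sequences of frame feature
    vectors (each element is a tuple of floats).  Classic O(n*m) dynamic
    programming over the cumulative-cost matrix D."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A recognizer would compute this distance between the observed skeleton-feature sequence and each stored gesture template and pick the template with the smallest distance; the patent's "improved" variant is not specified here.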

Non-overlapping field-of-view camera pose calibration method based on point cloud feature map registration

The invention discloses a non-overlapping field-of-view camera pose calibration method based on point cloud feature map registration. The method comprises the following steps: (1) carrying out basic calibration on a plurality of cameras with non-overlapping fields of view to obtain their intrinsic parameters; (2) utilizing the plurality of cameras to carry out environment detection and simultaneous localization and mapping, constructing a point cloud map, and extracting key frames to solve the pose matrix of each camera; (3) extracting an image frame from the key frames of one camera, carrying out similarity detection against the key frames of the other cameras, constructing a matching frame point set and a matching point pair set, and minimizing the projection error between the projection of the point cloud map points onto the image frame and the actual pixel coordinates; and (4) carrying out feature matching on frames near the matched frame, fusing all feature points, carrying out global optimization and iterative solution of the relative pose matrix, selecting correction parameters according to the practical situation, and carrying out final pose calibration of the cameras. The method solves the problems of high calibration workload, low work efficiency and low accuracy of traditional calibration methods.
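Solving a relative pose from a matched 3-D point pair set is commonly done with the SVD-based Kabsch method; the sketch below is a generic stand-in for the relative-pose solution in step (4), not the patent's actual iterative global optimization:

```python
import numpy as np

def relative_pose(src, dst):
    """Estimate the rigid transform (R, t) with dst ≈ R @ src + t from
    matched 3-D point pairs, via the Kabsch/SVD method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correction matrix guards against a reflection solution
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

This closed-form estimate is a typical initialization for the subsequent iterative refinement over all fused feature points.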

Video image stabilization method for air-based platform hovering

A method for stabilizing video images during hovering of an air-based platform comprises the following steps: first, selecting a frame image in a video sequence as a reference frame; extracting feature points of the reference image and the current image of the video sequence by using a feature extraction method such as the scale-invariant feature transform (SIFT); preliminarily matching the features by taking the Euclidean distance as the feature matching criterion, so as to form feature match point pairs; further screening the feature match points according to the invariability of the relative positions of feature points in the image background, and removing wrongly matched point pairs as well as the point pairs located on a moving target; performing a least-squares calculation with the feature match point pairs in a six-parameter affine transformation model so as to obtain the model parameters; and performing correction compensation on the current image so as to obtain a stable video sequence output with a fixed field of view. The invention also provides the idea of switching to a new reference frame at an interval of a certain number of frames, thereby reducing errors and improving the stabilization accuracy. The invention can be applied to traffic monitoring, target tracking and other fields and has wide market prospects and application value.
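The six-parameter affine least-squares step can be sketched directly; the matched point pairs are assumed to be already screened for outliers as the abstract describes:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of the six-parameter affine model
        [x'; y'] = [a b; c d] [x; y] + [tx; ty]
    from matched point pairs.  Each pair contributes two rows to the
    linear system A p = b with p = (a, b, c, d, tx, ty)."""
    src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src        # rows for x': a*x + b*y + tx
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = src        # rows for y': c*x + d*y + ty
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    a_, b_, c_, d_, tx, ty = params
    return np.array([[a_, b_], [c_, d_]]), np.array([tx, ty])
```

Inverting the fitted transform and warping the current frame with it yields the motion-compensated (stabilized) image.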

Improved closed-loop detection algorithm-based mobile robot vision SLAM (Simultaneous Localization and Mapping) method

The present invention provides an improved closed-loop detection algorithm-based mobile robot vision SLAM (Simultaneous Localization and Mapping) method. The method includes the following steps: S1, a Kinect is calibrated by using the Zhang Zhengyou calibration method; S2, ORB feature extraction is performed on the acquired RGB images, and feature matching is performed by using FLANN (Fast Library for Approximate Nearest Neighbors); S3, mismatches are deleted, the space coordinates of the matching points are obtained, and the inter-frame pose transformation (R, t) is estimated by adopting the PnP algorithm; S4, structureless iterative optimization is performed on the pose transformation solved by PnP; S5, the image frames are preprocessed, the images are described by using the bag of visual words, an improved similarity score matching method is used to perform image matching so as to obtain closed-loop candidates, and the correct closed loops are selected; and S6, a graph optimization method centered on bundle adjustment is used to optimize poses and landmarks, and more accurate camera poses and landmarks are obtained through continuous iterative optimization. With the method of the invention adopted, more accurate pose estimations and better three-dimensional reconstruction effects in indoor environments can be obtained.
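The similarity matching in S5 compares bag-of-visual-words histograms between the current frame and past keyframes. A plain cosine-similarity baseline is sketched below; the patent's improved score is not specified here, so the function and the 0.8 threshold are only generic illustrations:

```python
import math

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-visual-words histograms."""
    dot = sum(a * b for a, b in zip(h1, h2))
    denom = (math.sqrt(sum(a * a for a in h1))
             * math.sqrt(sum(b * b for b in h2)))
    return dot / denom if denom else 0.0

def loop_candidates(query, keyframes, threshold=0.8):
    """Return indices of keyframe histograms whose similarity to the query
    frame exceeds the threshold (closed-loop candidates)."""
    return [i for i, h in enumerate(keyframes)
            if bow_similarity(query, h) >= threshold]
```

Each candidate returned here would still be geometrically verified (e.g. by feature matching and pose estimation) before its loop-closure constraint is added to the graph in S6.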