732 results about "Point match" patented technology

Visual ranging-based simultaneous localization and map construction method

The invention provides a visual ranging-based simultaneous localization and map construction method. The method includes the following steps: a binocular image is acquired and corrected to obtain a distortion-free binocular image; feature extraction is performed on the distortion-free binocular image to generate feature point descriptors; feature point matching relations of the binocular image are established; the horizontal parallax of matched feature points is obtained according to the matching relations, and the real space depth is calculated from the parameters of the binocular image capture system; the feature points of the current frame are matched against feature points in a world map; wrongly matched feature points are removed to retain the successfully matched feature points; a transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference coordinate system is calculated, and a pose change estimate of the binocular image capture system relative to its initial position is obtained from the transformation matrix; and the world map is established and updated. The method has low computational complexity, centimeter-level positioning accuracy and unbiased position estimation.
Owner:北京超星未来科技有限公司
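The depth-from-parallax step in the abstract above is the standard rectified-stereo relation Z = f·B/d. A minimal sketch, with illustrative focal length and baseline values that are assumptions, not taken from the patent:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return real-space depth (metres) for one matched feature pair.

    disparity_px : horizontal pixel offset between left and right matches
    focal_px     : camera focal length in pixels
    baseline_m   : distance between the two camera centres in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 42 px disparity -> 2.0 m
z = depth_from_disparity(42.0, 700.0, 0.12)
```

Larger disparities mean nearer points, which is why depth error grows quadratically with distance in binocular systems.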

Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds

Inactive | CN103426165A | Solves the problem of multi-angle observation | Reduces complexity | Image analysis | 3D modelling | Point cloud | Transformation parameter
The invention relates to a precise registration method of ground laser-point clouds (ground base) and unmanned aerial vehicle image reconstruction point clouds (aerial base). The method comprises: generating overlapping areas of the ground laser-point clouds and the unmanned aerial vehicle image reconstruction point clouds on the basis of image three-dimensional reconstruction and point cloud rough registration; then traversing ground base images in the overlapping areas, extracting ground base image feature points through a feature point extraction algorithm, searching for aerial base point clouds in the neighborhood range of the ground base point clouds corresponding to the feature points, and obtaining the aerial base image feature points matched with the aerial base point clouds to establish same-name feature point sets; and, according to the extracted same-name feature point sets of the ground base and aerial base images and the transformation relation between coordinate systems, estimating a coordinate transformation matrix between the two point clouds to achieve precise registration. By extracting same-name feature points from the images corresponding to the ground laser-point clouds and to the unmanned aerial vehicle images, the transformation parameters of the two point cloud data sets can be obtained indirectly, thereby improving the precision and reliability of point cloud registration.
Owner:吴立新 +1
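The final step above, estimating a coordinate transformation from same-name point pairs, has a closed-form least-squares solution. A sketch in 2D for brevity (the patent works with 3D clouds, where the rotation is usually recovered via SVD):

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares rotation angle and translation mapping src -> dst.

    src, dst : equal-length lists of matched (x, y) point pairs.
    """
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    s_cos = s_sin = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cx_s, ys - cy_s      # centre both sets
        xd, yd = xd - cx_d, yd - cy_d
        s_cos += xs * xd + ys * yd          # dot terms
        s_sin += xs * yd - ys * xd          # cross terms
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

Given three points rotated 90° and shifted by (2, 3), the routine recovers exactly that transform.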

Unmanned aerial vehicle three-dimensional map construction method and device, computer equipment and storage medium

The invention relates to an unmanned aerial vehicle three-dimensional map construction method. The method comprises the following steps: obtaining video frame images shot by a camera and extracting feature points in each video frame image; matching the feature points by adopting a color histogram and scale-invariant feature transformation hybrid matching algorithm to obtain feature point matching pairs; calculating a pose transformation matrix from the feature point matching pairs; determining the three-dimensional coordinates corresponding to each video frame image according to the pose transformation matrix, and converting the three-dimensional coordinates of the feature points in the video frame images into a world coordinate system to obtain a three-dimensional point cloud map; taking the video frame images as the input of a target detection model to obtain target object information; and combining the three-dimensional point cloud map with the target object information to obtain a three-dimensional point cloud map containing the target object information. The method improves the real-time performance and accuracy of three-dimensional point cloud map construction, and the resulting map contains rich information. In addition, the invention further provides an unmanned aerial vehicle three-dimensional map construction device, computer equipment and a storage medium.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
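The feature-point matching step above can be sketched as brute-force nearest-neighbour descriptor matching with Lowe's ratio test to keep only unambiguous pairs. Descriptors are plain lists here; the patent's hybrid colour-histogram / scale-invariant descriptors are simplified away for illustration:

```python
def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) index pairs where desc_a[i] clearly matches desc_b[j]."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        # Rank candidates in desc_b by squared distance to this descriptor.
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

The ratio test discards ambiguous descriptors that match two candidates almost equally well, a common source of the wrong pairs that later pose estimation must reject.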

Real-time three-dimensional scene reconstruction method for UAV based on EG-SLAM

Active | CN108648270A | Requirements for reducing repetition rates | Improve realism | Image enhancement | Image analysis | Point cloud | Texture rendering
The present invention provides a real-time three-dimensional scene reconstruction method for a UAV (unmanned aerial vehicle) based on EG-SLAM. Visual information is acquired by an unmanned aerial camera to reconstruct a large-scale three-dimensional scene with texture details. Compared with many existing methods, the collected images are processed directly on the CPU, and positioning and three-dimensional map reconstruction can be implemented quickly in real time. Rather than using the conventional PnP method, the EG-SLAM method solves the pose of the UAV directly from the feature point matching relationship between two frames, so that the required repetition rate (overlap) of the collected images is reduced. In addition, the large amount of environmental information obtained gives the UAV a more sophisticated and meticulous perception of the environment structure; texture rendering is performed on the large-scale three-dimensional point cloud map generated in real time, reconstruction of a large-scale three-dimensional map is realized, and a more intuitive and realistic three-dimensional scene is obtained.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
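Solving pose "directly from the feature point matching relationship between two frames" rests on epipolar geometry: for an essential matrix E and a correct match of normalised homogeneous image points x1, x2, the residual x2ᵀ·E·x1 should be near zero. A sketch of that constraint check; the example E below encodes a pure sideways translation and is illustrative, not from the patent:

```python
def epipolar_residual(E, x1, x2):
    """Return x2^T E x1 for homogeneous 3-vectors x1, x2 and a 3x3 matrix E."""
    Ex1 = [sum(E[r][c] * x1[c] for c in range(3)) for r in range(3)]
    return sum(x2[r] * Ex1[r] for r in range(3))

# Pure translation t = (1, 0, 0): E = [t]_x, the skew-symmetric matrix of t.
E = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]

# A point that only shifts horizontally between frames satisfies the constraint.
r = epipolar_residual(E, [0.2, 0.3, 1.0], [0.5, 0.3, 1.0])
```

In a full pipeline, E is estimated from many such matches (e.g. the five-point algorithm) and then decomposed into rotation and translation.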

Underwater carrier geomagnetic anomaly feature points matching navigation method

An underwater carrier geomagnetic anomaly feature points matching navigation method belongs to the technical field of underwater navigation and solves the problem in the prior art that the location of an underwater carrier can not be determined according to geomagnetic field information. The method provided by the invention comprises the following steps of: acquiring a target magnetic moment vector of present position of the underwater carrier and a relative position vector from the present position of the underwater carrier to a target magnetic source; constructing a map of the underwater target magnetic source; carrying out coordinate transformation based on the absolute position of the underwater carrier so as to obtain geographic coordinates of the map; calculating the position of the underwater carrier in the map at sampling time and the geographic coordinates of the underwater carrier at the sampling time; updating the position of the target magnetic source; updating the map of the underwater target magnetic source; and repeating the above relative processes to complete the matching navigation of the underwater carrier. The invention is suitable for underwater carrier navigation.
Owner:NORTHEAST FORESTRY UNIVERSITY

Online deep learning SLAM based image cloud computing method and system

Active | CN108921893A | Real-time update and feedback | Reduce training time | Image enhancement | Image analysis | Data set | Key frame
The invention discloses an online deep learning SLAM based image cloud computing method. The image cloud computing method comprises the following steps: acquiring and storing image data; extracting and uploading key frames; using the image data to construct a data set and training on it to obtain optimal convolutional neural network parameters; extracting and recognizing real-time image feature points, and performing feature point matching on adjacent frame images; iterating over the image feature points to obtain the best matching transformation matrix, correcting it with position and pose information, and obtaining the camera pose transformation; obtaining the optimal pose estimation through registration of the point cloud data and the position and pose information; transforming the pose information into a coordinate system through matrix transformation to obtain map information; repeating the previous steps in regions with insufficient precision; and allowing a client to display the result while performing online adjustment. The invention parallelizes image processing, deep learning training and SLAM by using cloud computing technology to improve the efficiency and accuracy of image processing, positioning and mapping.
Owner:SOUTH CHINA UNIV OF TECH
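The "iterate to the best matching transformation matrix" step is commonly realized with a RANSAC-style loop. A minimal sketch over a pure-translation model (the real system would fit a full pose; the simplification is this example's assumption):

```python
import random

def ransac_translation(src, dst, iters=100, tol=0.5, seed=0):
    """Find the 2D translation supported by the most (src, dst) match pairs."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        i = rng.randrange(len(src))                 # minimal sample: 1 pair
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        # Count pairs consistent with this candidate translation.
        inliers = [k for k in range(len(src))
                   if abs(dst[k][0] - src[k][0] - tx) <= tol
                   and abs(dst[k][1] - src[k][1] - ty) <= tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```

Given three matches shifted by (1, 2) plus one outlier, the loop recovers the majority translation and flags three inliers.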

Vehicle self-positioning method based on street view image database

The invention discloses a vehicle self-positioning method based on a street view image database. The vehicle self-positioning method comprises the following steps: 1, collecting street view images with a camera, extracting main color feature vector information, SURF feature points and position information from the collected images, and storing the extracted information in the database; 2, taking images shot during vehicle driving as to-be-matched images, extracting their main color feature vectors, obtaining an initial matched image by calculating the similarity between the main color feature vectors of the to-be-matched images and those of the images in the original database, extracting the position information of the initial matched image, and preliminarily determining the position of the vehicle; and 3, extracting adjacent-region images of the initial matched image to form a search space, performing feature point matching between the to-be-matched images and the images in the search space to obtain an optimal matched image, extracting the shooting position coordinate of the optimal matched image and the position coordinates of the other eight adjacent regions, calculating the weight of each coordinate, and then calculating the accurate coordinate of the vehicle position through a formula.
Owner:CHANGAN UNIV
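The final positioning formula is not given in the abstract, so the sketch below assumes a similarity-weighted average of the best-match position and its neighbours, a common choice for this refinement step:

```python
def weighted_position(coords, weights):
    """Weighted mean of (x, y) coordinates; weights need not be normalised."""
    total = sum(weights)
    x = sum(w * c[0] for w, c in zip(weights, coords)) / total
    y = sum(w * c[1] for w, c in zip(weights, coords)) / total
    return x, y

# Best match weighted 0.6, two neighbouring shot positions 0.2 each
# (weights are illustrative assumptions).
pos = weighted_position([(10.0, 20.0), (12.0, 20.0), (10.0, 22.0)],
                        [0.6, 0.2, 0.2])
```

Weighting by match similarity pulls the estimate toward the most confidently matched shot position while still smoothing over neighbouring views.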

A method and system for realizing a visual SLAM semantic mapping function based on a dilated (atrous) convolutional deep neural network

The invention relates to a method for realizing a visual SLAM semantic mapping function based on a dilated (atrous) convolutional deep neural network. The method comprises the following steps: (1) using an embedded development processor to obtain the color information and the depth information of the current environment via an RGB-D camera; (2) obtaining feature point matching pairs from the collected images, carrying out pose estimation, and obtaining scene space point cloud data; (3) carrying out pixel-level semantic segmentation on the images by utilizing deep learning, and giving the spatial points semantic annotation information through the mapping between the image coordinate system and the world coordinate system; (4) using manifold clustering to optimize the semantic segmentation and eliminate errors; and (5) performing semantic mapping, splicing the spatial point clouds to obtain a point cloud semantic map composed of dense discrete points. The invention also relates to a system for realizing the visual SLAM semantic mapping function based on the dilated convolutional deep neural network. With the method and the system, the spatial network map carries higher-level semantic information and better meets the use requirements of real-time mapping.
Owner:EAST CHINA UNIV OF SCI & TECH

Secondary classification fusion identification method for fingerprint and finger vein bimodal identification

The invention provides a secondary classification fusion identification method for fingerprint and finger vein bimodal identification. A fingerprint module and a vein module are used as primary classifiers, and a secondary decision module is used as a secondary classifier. The method comprises the following steps: reading a fingerprint image and a vein image through the fingerprint module and the vein module; pre-processing the read images respectively and extracting the characteristic point sets of both; performing identification on the images respectively to obtain respective identification results, wherein the fingerprint identification adopts a minutiae (detail point) matching-based method, and the vein identification uses an improved Hausdorff distance; forming a new characteristic vector from the extracted fingerprint and vein characteristic point sets in a characteristic-series (feature concatenation) mode through the secondary decision module, so as to form the secondary classifier and obtain an identification result; and finally, performing decision-level fusion on the three identification results. The method makes full use of the identification information of fingerprints and finger veins and effectively improves the accuracy of the identification system, with a high identification rate.
Owner:HARBIN ENG UNIV
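The vein-matching metric above is based on the Hausdorff distance between two minutiae point sets. The patent's "improved" variant is not specified, so this sketch gives the classical definition:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between 2D point sets a and b."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def directed(u, v):
        # Worst-case distance from any point of u to its nearest point in v.
        return max(min(d(p, q) for q in v) for p in u)

    return max(directed(a, b), directed(b, a))
```

Common "improvements" replace the max with an average or a rank statistic to reduce sensitivity to a single outlying minutia.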

Sea cucumber detection and binocular visual positioning method based on deep learning

The invention provides a sea cucumber detection and binocular visual positioning method based on deep learning, suitable for the submarine sea cucumber fishing task of an underwater robot in an ocean pasture. The method mainly comprises the following steps: calibrating the binocular cameras to obtain the internal and external parameters of the cameras; rectifying the binocular cameras so that the imaging origin coordinates of the left and right views are consistent, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are aligned; performing submarine image data collection with the calibrated binocular cameras; performing image enhancement on the collected image data through a dark channel prior algorithm based on white balance compensation; performing deep learning-based sea cucumber target detection on the enhanced submarine images; and applying a binocular stereo feature point matching algorithm to the enhanced and detected images to obtain the two-dimensional regression box information of a target and, from it, the three-dimensional positioning coordinate information of the target. The method realizes accurate positioning of underwater sea cucumbers without manual participation.
Owner:HARBIN ENG UNIV
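The last step above, turning a matched pixel in the regression box into a 3D coordinate, is pinhole-model back-projection. A sketch with illustrative intrinsics (fx, fy, cx, cy, baseline are assumptions, not patent values):

```python
def backproject(u, v, disparity, fx, fy, cx, cy, baseline):
    """Return (X, Y, Z) in the left-camera frame for pixel (u, v).

    fx, fy : focal lengths in pixels; cx, cy : principal point;
    baseline : stereo baseline in metres; disparity : pixels.
    """
    z = fx * baseline / disparity        # depth from rectified disparity
    x = (u - cx) * z / fx                # back-project through the pinhole
    y = (v - cy) * z / fy
    return x, y, z

pt = backproject(u=420.0, v=300.0, disparity=35.0,
                 fx=700.0, fy=700.0, cx=350.0, cy=250.0, baseline=0.1)
```

The rectification step described in the abstract is what justifies using a purely horizontal disparity here.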

Three-dimensional real scene collection and modeling method and apparatus, and readable storage medium

The invention provides a three-dimensional real scene collection and modeling method and apparatus, and a readable storage medium. A laser scanning part of the three-dimensional real scene collection apparatus collects a laser point cloud of a target in real time; a panoramic shooting part collects a real panoramic image of the target; and the point cloud data and panoramic image data are synchronously uploaded to a three-dimensional real scene modeling system for storage and processing. The laser point cloud is subjected to homonymous (same-name) point matching calculation to build an overall three-dimensional model; the panoramic images are subjected to splicing and integration processing; the processed three-dimensional model and the panoramic images are subjected to fusion processing to finish three-dimensional real scene modeling; and the three-dimensional real scene model is pushed by the modeling system to a three-dimensional real scene application system for various three-dimensional real scene applications. Therefore, the collection and modeling processing speed is greatly increased; in emergencies, data timeliness can be ensured; the modeling efficiency is improved; and the modeling cycle is shortened.
Owner:上海激点信息科技有限公司
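The homonymous-point matching step between scans can be sketched as thresholded nearest-neighbour pairing; production systems accelerate the same idea with k-d trees. Shown in 2D for brevity:

```python
def nearest_point_pairs(scan_a, scan_b, max_dist=0.5):
    """Pair each point of scan_a with its nearest point in scan_b,
    keeping only pairs closer than max_dist. Returns (i, j) index pairs."""
    pairs = []
    for i, p in enumerate(scan_a):
        # Index of the nearest point in scan_b (squared distance suffices).
        j = min(range(len(scan_b)),
                key=lambda k: (p[0] - scan_b[k][0]) ** 2
                            + (p[1] - scan_b[k][1]) ** 2)
        q = scan_b[j]
        if ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= max_dist:
            pairs.append((i, j))
    return pairs
```

The distance gate rejects points visible in only one scan, which would otherwise drag the model alignment toward spurious correspondences.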

Heterogeneous remote sensing image registration method

The invention discloses a heterogeneous remote sensing image registration method. Its core idea takes multi-scale matching as a basis, uses straight-line intersection points as elements, applies a point matching method that combines the Voronoi diagram and the spectrogram, and integrates iterative feature extraction with the matching policy, thereby overcoming the heavy dependence on feature extraction, poor reliability and low accuracy of existing methods. The method comprises the steps that: multi-scale analysis is carried out on the original images; straight line extraction and intersection point acquisition are carried out at the coarsest scale; the point matching method combining the Voronoi diagram and the spectrogram is applied to the intersection point sets to acquire homonymous point pairs; whether the matching result is qualified is checked: if so, the method proceeds to the next step, otherwise self-adaptive parameter adjustment is carried out and straight line extraction and point set matching are performed again; an initial transformation is applied to the images to be registered, and straight line features are respectively extracted; homonymous straight line segments are searched to acquire candidate homonymous point pairs; a KNN graph is used to acquire accurately matched point pairs; and the transformation parameters are solved. The method is mainly used for the registration of visible light, infrared, synthetic aperture radar (SAR) and other heterogeneous remote sensing images.
Owner:WUHAN UNIV
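Using "straight line intersection points as elements" starts from a basic primitive: intersecting two lines. A sketch for lines in the form a·x + b·y = c, returning None for (near-)parallel pairs:

```python
def line_intersection(l1, l2, eps=1e-12):
    """Intersect two infinite lines (a, b, c) with a*x + b*y = c.

    Returns (x, y), or None when the lines are (near-)parallel.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1          # zero when the lines are parallel
    if abs(det) < eps:
        return None
    # Cramer's rule for the 2x2 linear system.
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# The lines x = 2 and y = 3 intersect at (2, 3).
p = line_intersection((1.0, 0.0, 2.0), (0.0, 1.0, 3.0))
```

Intersection points are attractive registration primitives for heterogeneous imagery because lines (roads, field edges) survive modality changes better than intensity patterns do.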