615 results about "Feature point matching" patented technology

Point feature matching. In image processing, point feature matching is an effective method for detecting a specified target in a cluttered scene. It detects a single object rather than multiple objects.
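
By way of concrete illustration, here is a minimal OpenCV sketch of this detect-and-match workflow; the image paths, the choice of ORB, and all parameter values are placeholders, not a reference implementation:

    # Minimal point-feature-matching sketch; image paths are hypothetical.
    import cv2

    target = cv2.imread("target_object.png", cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread("cluttered_scene.png", cv2.IMREAD_GRAYSCALE)
    assert target is not None and scene is not None, "replace the placeholder paths"

    orb = cv2.ORB_create(nfeatures=1000)             # detector + binary descriptor
    kp_t, des_t = orb.detectAndCompute(target, None)
    kp_s, des_s = orb.detectAndCompute(scene, None)

    # Hamming distance suits binary descriptors; cross-check keeps only
    # mutually-best matches between target and scene.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
    print(f"{len(matches)} putative target-to-scene matches")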

Visual ranging-based simultaneous localization and map construction method

The invention provides a visual ranging-based simultaneous localization and map construction method. The method includes the following steps: a binocular image is acquired and corrected to obtain a distortion-free binocular image; feature extraction is performed on the distortion-free binocular image to generate feature point descriptors; feature point matching relations of the binocular image are established; the horizontal parallax of matched feature points is obtained from the matching relations, and real spatial depth is calculated from the parameters of the binocular image capture system; the feature points of the current frame are matched against feature points in a world map; wrongly matched feature points are removed to obtain the successfully matched feature points; a transformation matrix between the coordinates of the successfully matched feature points in the world coordinate system and their three-dimensional coordinates in the current reference coordinate system is calculated, and a pose-change estimate of the binocular image capture system relative to its initial position is obtained from the transformation matrix; and the world map is established and updated. The method offers low computational complexity, centimeter-level positioning accuracy, and unbiased position estimation.
Owner:北京超星未来科技有限公司
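
The depth step described above follows the standard stereo relation Z = f * B / d (focal length times baseline over horizontal parallax of a matched pair). A minimal sketch; the focal length and baseline below are illustrative values, not parameters from the patent:

    import numpy as np

    def disparity_to_depth(disparity_px, focal_px, baseline_m):
        """Depth of a matched stereo feature from its horizontal parallax: Z = f * B / d."""
        d = np.asarray(disparity_px, dtype=float)
        return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

    # Roughly KITTI-like numbers, chosen only for illustration.
    print(disparity_to_depth([38.8, 9.7], focal_px=718.856, baseline_m=0.54))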

Remote obstacle detection method based on laser radar multi-frame point cloud fusion

Publication: CN110221603A (Active)
The invention discloses a remote obstacle detection method based on laser radar multi-frame point cloud fusion. A local coordinate system and a world coordinate system are established; feature points are extracted for each laser point on the annular scanning lines of the laser radar from the original point cloud data in the local coordinate system; and the global pose of the current position relative to the initial position, together with the de-distorted point cloud in the world coordinate system, is obtained through inter-frame feature point matching and map feature point matching. The de-distorted point clouds of the current frame and the previous frame are fused into denser de-distorted point cloud data, which is transformed back to the local coordinate system and projected onto a two-dimensional grid, and obstacles are screened according to the height-variation features of each grid cell. The method solves the low detection rate of remote obstacles caused by sparse laser point clouds: remote obstacles can be detected effectively, the false detection rate and missed detection rate are low, and the system cost is greatly reduced.
Owner:ZHEJIANG UNIV
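
The final screening stage lends itself to a compact sketch: project points onto a 2-D grid and flag cells whose height span exceeds a threshold. The cell size and threshold below are assumptions, not the patent's values:

    import numpy as np

    def screen_obstacles(points, cell=0.2, min_height_span=0.3):
        """Flag 2-D grid cells whose z-span (max z - min z) exceeds a threshold.
        `points` is an Nx3 array of fused, de-distorted points; parameters are
        illustrative."""
        ij = np.floor(points[:, :2] / cell).astype(np.int64)
        cells = {}
        for (i, j), z in zip(map(tuple, ij), points[:, 2]):
            lo, hi = cells.get((i, j), (z, z))
            cells[(i, j)] = (min(lo, z), max(hi, z))
        return [c for c, (lo, hi) in cells.items() if hi - lo >= min_height_span]

    pts = np.random.default_rng(0).uniform([-10, -10, 0], [10, 10, 0.05], (5000, 3))
    pts[:100, 2] += 1.0                      # inject a tall structure
    print(len(screen_obstacles(pts)), "grid cells flagged as obstacle")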

Remote sensing image registration method of multi-source sensor

The invention provides a remote sensing image registration method for a multi-source sensor, relating to image processing technology. The method comprises the following steps: performing scale-invariant feature transform (SIFT) on a reference image and an image to be registered, extracting feature points, calculating the nearest and second-nearest Euclidean distances between feature points in the image to be registered and the reference image, and screening optimal matching point pairs according to their ratio; rejecting wrongly registered points with the random sample consensus (RANSAC) algorithm and screening the original registration point pairs; calculating distribution quality parameters of the feature point pairs and selecting uniformly distributed effective control points according to a feature point weight coefficient; searching for the optimal registration point among the control points of the image to be registered according to a mutual-information similarity criterion, thus obtaining optimal registration point pairs for the control points; and acquiring the geometric deformation parameters of the image to be registered by polynomial parameter transformation, thus achieving accurate registration of the image to be registered against the reference image. The method offers high calculation speed and high registration precision, and can meet the registration requirements of multi-sensor, multi-temporal and multi-view remote sensing images.
Owner:济钢防务技术有限公司
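
The ratio-test and RANSAC stages are standard and easy to sketch with OpenCV. One hedge: the patent's final warp is a polynomial transform, so the homography below is only a stand-in to demonstrate the outlier-rejection stage:

    import cv2
    import numpy as np

    def ransac_screened_matches(ref_gray, mov_gray):
        """Ratio-test screening followed by RANSAC rejection of wrong matches."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(ref_gray, None)
        kp2, des2 = sift.detectAndCompute(mov_gray, None)
        # Nearest vs. second-nearest Euclidean distance (Lowe's ratio test).
        knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
        good = [m for m, n in knn if m.distance < 0.75 * n.distance]
        src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # Homography here is an illustrative model for inlier selection only.
        _H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return [g for g, keep in zip(good, inlier_mask.ravel()) if keep]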

Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm

The invention relates to a dynamic target position and attitude measurement method based on monocular vision at the tail end of a mechanical arm, and belongs to the field of vision measurement. The method comprises the following steps: first calibrating the camera and performing hand-eye calibration; then shooting two pictures with the camera, extracting spatial feature points in the target areas of the pictures with a scale-invariant feature extraction method, and matching the feature points; solving the fundamental matrix between the two pictures under epipolar geometry constraints to obtain the essential matrix, and further solving the rotation and displacement transformation matrices of the camera; then performing three-dimensional reconstruction and scale correction on the feature points; and finally constructing a target coordinate system from the reconstructed feature points to obtain the position and attitude of the target relative to the camera. The method adopts monocular vision, simplifies the calculation process, and uses hand-eye calibration to simplify the elimination of erroneous solutions when measuring the camera position and attitude. The method is suitable for measuring the relative positions and attitudes of stationary and low-dynamic targets.
Owner:TSINGHUA UNIV
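
The epipolar-geometry step (essential matrix, then rotation and translation) can be sketched with OpenCV; the matched point arrays and the intrinsic matrix K are assumed inputs:

    import cv2

    def relative_pose(pts1, pts2, K):
        """pts1, pts2: Nx2 float arrays of matched feature points; K: 3x3 intrinsics."""
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        # recoverPose performs the cheirality check that discards the spurious
        # (R, t) factorizations of the essential matrix.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t      # t is unit-norm: monocular scale must come from elsewhere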

Binocular camera-based panoramic image splicing method

The invention provides a binocular camera-based panoramic image splicing method. A binocular camera is arranged at a given viewpoint in space and captures a single shot, obtaining two fisheye images. The traditional algorithm is improved to address the insufficient distortion-correction capacity of the latitude-longitude correction method in the horizontal direction, and the corrected images are projected into the same coordinate system using spherical orthographic projection, achieving fast correction of the fisheye images. Feature points in the overlapping area of the two projected images are extracted with the SIFT feature point detection method; the K-D tree search strategy is adopted to find the Euclidean nearest neighbors of the feature points for feature point matching; the RANSAC (random sample consensus) algorithm is used to de-noise the feature points and eliminate mismatched points, completing the image splicing; and a linear fusion method is adopted to fuse the spliced images, avoiding abrupt color and scene changes in the image transition area.
Owner:深圳市优象计算技术有限公司
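
The linear fusion stage can be sketched independently of the fisheye correction: a weight ramp across the overlap removes abrupt color transitions. The toy inputs assume both images have already been warped onto the same projection:

    import numpy as np

    def linear_blend(left, right, overlap):
        """Feather-blend `overlap` shared columns of two pre-aligned images."""
        w = np.linspace(1.0, 0.0, overlap)[None, :, None]   # ramp across the seam
        seam = left[:, -overlap:] * w + right[:, :overlap] * (1 - w)
        return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

    a = np.full((4, 6, 3), 200.0)
    b = np.full((4, 6, 3), 100.0)
    print(linear_blend(a, b, overlap=3).shape)   # (4, 9, 3)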

Method and device for detecting road traffic abnormal events in real time

The invention provides a method and device for detecting road traffic abnormal events in real time. The method includes the steps of: monitoring a road and obtaining multiple frames of continuous monitoring images; extracting bright white segments from the monitoring images and, by further processing, obtaining lane lines and lane end points; building a lane model and determining the bidirectional detection area of a lane according to the lane model; detecting moving objects in the bidirectional detection area with a Gaussian mixture model background subtraction method and determining their positions; building the mapping relation between each moving target and an actual vehicle from the target's positions across the multiple frames, using a posterior-probability split-and-merge algorithm together with feature point matching and tracking, to obtain the running track and running speed of the actual vehicle; and checking the lane model and the vehicle's track and speed against a prestored road traffic abnormal behavior semantic model to judge whether a road traffic abnormal event exists. The method has the advantages of intelligence and high accuracy.
Owner:TSINGHUA UNIV +1
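
The moving-object stage maps naturally onto OpenCV's Gaussian-mixture background subtractor; a sketch with a placeholder video path and illustrative thresholds:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("traffic.mp4")            # placeholder path
    mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = mog2.apply(frame)                       # 0 bg, 127 shadow, 255 fg
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadows
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        moving_boxes = [cv2.boundingRect(c) for c in contours
                        if cv2.contourArea(c) > 200]   # area threshold is illustrative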

Quick low-altitude remote sensing image automatic matching and aerial triangulation method

The utility model relates to a method for rapid automatic matching of low-altitude remote sensing images and aerial triangulation, characterized in that: serial images are captured from a low-altitude remote sensing platform; feature points are extracted from the images with feature extraction techniques, and all extracted feature points are saved automatically; corresponding (tie) points of adjacent images are matched automatically, and the matched tie points are propagated automatically to all overlapping images, yielding a large number of tie points observed in three or more images; and image coordinates of control points and checkpoints, measured semi-automatically, are combined with other non-photogrammetric observations to carry out high-precision aerial triangulation and precision evaluation of the adjustment results. The advantages are that stable, reliable matching results and high aerial triangulation precision can be obtained even when the low-altitude remote sensing images have large rotation angles, meeting the requirements of large-scale surveying and high-precision three-dimensional reconstruction.
Owner:WUHAN UNIV
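
Propagating pairwise matches into multi-image tie points is essentially connected-component grouping of keypoint observations; a toy union-find sketch, with invented image and keypoint IDs:

    from collections import defaultdict

    # An observation is (image_id, keypoint_id); a pairwise match links two of them.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    pair_matches = [(("A", 1), ("B", 7)), (("B", 7), ("C", 3)), (("A", 2), ("B", 9))]
    for a, b in pair_matches:
        union(a, b)

    tracks = defaultdict(set)
    for obs in parent:
        tracks[find(obs)].add(obs[0])        # images observing this track
    print(sum(1 for imgs in tracks.values() if len(imgs) >= 3),
          "tie point(s) seen in 3+ images")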

Robot semantic SLAM method based on object instance matching, processor and robot

The invention provides a robot semantic SLAM method based on object instance matching, a processor and a robot. The robot semantic SLAM method comprises the steps of: acquiring an image sequence shot while the robot operates, and conducting feature point extraction, matching and tracking on each frame to estimate camera motion; extracting key frames and performing instance segmentation on them to obtain all object instances in each key frame; carrying out feature point extraction on the key frames and calculating feature point descriptors, carrying out feature extraction and coding on all object instances in the key frames to calculate per-instance feature description vectors, and obtaining instance three-dimensional point clouds at the same time; carrying out feature point matching and instance matching on the feature points and object instances between adjacent key frames; and performing local nonlinear optimization on the SLAM pose estimation result by fusing the feature point matches and instance matches, obtaining key frames carrying object instance semantic annotation information, and mapping the key frames onto the instance three-dimensional point clouds to construct a three-dimensional semantic map.
Owner:SHANDONG UNIV
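
The instance-matching step can be sketched as greedy one-to-one assignment by cosine similarity of per-instance feature vectors; the similarity threshold and the toy descriptors are assumptions:

    import numpy as np

    def match_instances(desc_a, desc_b, min_sim=0.8):
        """Greedy one-to-one instance matching between two key frames."""
        a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
        b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
        sim = a @ b.T
        pairs = []
        while sim.size and sim.max() >= min_sim:
            i, j = np.unravel_index(np.argmax(sim), sim.shape)
            pairs.append((int(i), int(j)))
            sim[i, :], sim[:, j] = -1.0, -1.0      # enforce one-to-one assignment
        return pairs

    rng = np.random.default_rng(1)
    f1 = rng.normal(size=(4, 128))
    f2 = f1[[2, 0, 3]] + 0.01 * rng.normal(size=(3, 128))
    print(match_instances(f1, f2))   # recovers (2,0), (0,1), (3,2) in some order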

Automatic target recognition and tracking method based on affine invariant points and optical flow calculation

The invention discloses an automatic target recognition and tracking method based on affine invariant points and optical flow calculation, which comprises the following steps: first, carrying out image preprocessing on the target image and video frames and extracting affine-invariant feature points; then carrying out feature point matching and eliminating mismatched points, and declaring recognition successful when the matched pairs reach a certain number and an affine transformation matrix can be generated; then using the affine invariant points collected in the previous step for feature optical flow calculation to achieve real-time target tracking; and immediately returning to the first step to perform target recognition again if tracking of the target fails. The feature point operator used by the invention is an image local feature descriptor that is based on scale space and invariant to image zooming, rotation and even affine transformation. In addition, the adopted optical flow calculation method has a small calculation amount and high accuracy, and can achieve real-time tracking. The invention is widely applicable in fields such as video monitoring, image searching, computer-aided driving systems and robotics.
Owner:NANJING UNIV OF SCI & TECH
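
The optical-flow stage corresponds closely to pyramidal Lucas-Kanade tracking; a sketch, with the window size and termination criteria as illustrative choices:

    import cv2
    import numpy as np

    def track_points(prev_gray, curr_gray, prev_pts):
        """prev_pts: N x 1 x 2 float32 array of previously matched feature points."""
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, prev_pts, None,
            winSize=(21, 21), maxLevel=3,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        good = status.ravel() == 1
        # If too few points survive, the caller should fall back to the
        # recognition step, as the abstract describes.
        return prev_pts[good], nxt[good]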

Method for tracking human face pose and actions

The invention discloses a method for tracking the pose and actions of a human face, which comprises the following steps: in step S1, frame-by-frame images are extracted from a video stream, and face detection is carried out on the first frame of the input video or whenever tracking fails, obtaining a face bounding box; in step S2, during normal tracking, the more salient feature points of the facial texture in the previous frame (after its convergent iteration) are matched to the corresponding feature points found in the current frame, obtaining feature point matching results; in step S3, the shape of an active appearance model is initialized from the face bounding box or the feature point matching results, giving an initial value of the face shape in the current frame; and in step S4, the active appearance model is fitted with the inverse compositional algorithm, yielding three-dimensional face pose and facial action parameters. With this method, online tracking can be completed fully automatically in real time under ordinary illumination.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
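
Step S1 can be sketched with a stock Haar-cascade detector; this is one plausible choice, since the patent does not name a specific detector:

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(frame_bgr):
        """Return the largest detected face box (x, y, w, h), or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)               # helps under ordinary lighting
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                         minSize=(60, 60))
        return max(boxes, key=lambda b: b[2] * b[3]) if len(boxes) else None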

Uncalibrated multi-viewpoint image correction method for parallel camera array

Publication: CN102065313A (Inactive)
The invention relates to an uncalibrated multi-viewpoint image correction method for a parallel camera array. The method comprises the steps of: first extracting a set of feature points in the viewpoint images and determining matching point pairs between every two adjacent images; then introducing the RANSAC (Random Sample Consensus) algorithm to enhance the matching precision of the SIFT (Scale Invariant Feature Transform) feature points, and providing a block-based feature extraction method so that the refined positional information of the feature points serves as input to the subsequent correction steps, from which a correction matrix of the uncalibrated stereoscopic image pairs is calculated; then projecting the several non-coplanar correction planes onto one common correction plane and calculating the horizontal distance between adjacent viewpoints on the common correction plane; and finally adjusting the viewpoint positions horizontally until the parallaxes are uniform, completing the correction. The composite stereoscopic image after this multi-viewpoint uncalibrated correction has a strong sense of breadth and depth and a markedly enhanced stereoscopic effect compared with the image before correction, and can be applied to front-end signal processing for a wide range of 3DTV application devices.
Owner:SHANGHAI UNIV
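
The pairwise correction-matrix step resembles OpenCV's Hartley-style uncalibrated rectification; a sketch, assuming matched point arrays pts1 and pts2 are already available:

    import cv2

    def rectify_uncalibrated(pts1, pts2, img_size):
        """pts1, pts2: Nx2 float32 matched points; img_size: (width, height).
        Returns two rectifying homographies, or None on failure."""
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        inl1 = pts1[mask.ravel() == 1]
        inl2 = pts2[mask.ravel() == 1]
        ok, H1, H2 = cv2.stereoRectifyUncalibrated(inl1, inl2, F, img_size)
        return (H1, H2) if ok else None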

Real-time three-dimensional scene reconstruction method for UAV based on EG-SLAM

The present invention provides a real-time three-dimensional scene reconstruction method for a UAV (unmanned aerial vehicle) based on EG-SLAM. Visual information is acquired by an airborne camera to reconstruct a large-scale three-dimensional scene with texture details. Compared with many existing methods, this method runs directly on the CPU with the collected images and can position and reconstruct a three-dimensional map quickly and in real time. Rather than using the conventional PnP method, the EG-SLAM method solves the pose of the UAV directly from the feature point matching relationship between two frames, reducing the requirement on the overlap rate of the collected images. In addition, the large amount of environmental information obtained gives the UAV a more sophisticated and meticulous perception of the environment's structure; texture rendering is performed on the large-scale three-dimensional point cloud map generated in real time, achieving reconstruction of a large-scale three-dimensional map and a more intuitive and realistic three-dimensional scene.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
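
Once the two-frame pose has been solved from the epipolar geometry (e.g., by essential-matrix decomposition as sketched earlier in this list), map points follow by triangulation; a sketch with assumed inputs:

    import cv2
    import numpy as np

    def triangulate(K, R, t, pts1, pts2):
        """K: 3x3 intrinsics; (R, t): pose of frame 2 relative to frame 1;
        pts1, pts2: Nx2 float arrays of matched pixel coordinates."""
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t.reshape(3, 1)])
        X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4xN homogeneous
        return (X_h[:3] / X_h[3]).T                           # Nx3 Euclidean points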

Method for creating 3D map based on 3D laser

Publication: CN108320329A (Active)
The invention discloses a method for creating a 3D map based on a 3D laser, and belongs to the technical field of data processing. The method comprises the steps of: enriching the point cloud data obtained from a 3D laser sensor to obtain ordered laser point cloud data frames; performing feature extraction on each point cloud data frame to obtain its feature points; optimizing the transformation matrix with the LM (Levenberg-Marquardt) algorithm, taking the transformation matrix that minimizes the sum of squared distance errors over all matched feature points as the pose of the radar; and transforming each frame of point cloud data into a point cloud map in the global coordinate system according to the radar pose, then converting the point cloud map into a map expressed by voxels. Enriching the original laser data provides a data basis for creating the 3D map, and the voxel map representation avoids the inconvenience of filtering and denoising a point cloud map while improving the definition of the 3D map.
Owner:维坤智能科技(上海)有限公司 +1
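
The final conversion to a voxel map is a quantize-and-deduplicate operation; a sketch with an assumed voxel edge length:

    import numpy as np

    def voxelize(points, voxel=0.1):
        """Convert an Nx3 registered point cloud to voxel centers: one occupied
        voxel per grid cell (the edge length is an assumed parameter)."""
        keys = np.unique(np.floor(points / voxel).astype(np.int64), axis=0)
        return (keys + 0.5) * voxel          # Mx3 voxel centers, duplicates removed

    cloud = np.random.default_rng(2).normal(size=(10000, 3))
    print(voxelize(cloud).shape)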

Method for automatically stitching unmanned aerial vehicle remote sensing images based on flight control information

The invention provides a method for automatically stitching unmanned aerial vehicle remote sensing images based on flight control information; in particular, it achieves automatic correction and stitching of the remote sensing images according to attitude parameters acquired by the flight control system. The method comprises the following steps: correcting the yaw of the images and determining their adjacency relations according to the attitude parameters acquired by the UAV flight control system; extracting feature points from the corrected images, matching them with those of adjacent images, and extracting identical (tie) points; calculating the extent of the output image according to an image calculation model and comparing it with the extent determined by the attitude parameters, and if the difference is within tolerance, considering that the number and quality of identical points between the adjacent images meet the stitching requirement and the images are connected; sequentially calculating the connection relations among all images and solving for the maximum connected component among them; and determining transformation parameters by model calculation over the connected component and outputting a stitched image of the survey region.
Owner:REMOTE SENSING APPLIED INST CHINESE ACAD OF SCI
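
Solving for the maximum connected component of the image connection graph is a standard traversal; a toy sketch over invented image IDs:

    from collections import defaultdict, deque

    def largest_component(edges):
        """edges: accepted pairwise stitches (img_a, img_b); returns the largest
        connected set of images."""
        adj = defaultdict(set)
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        seen, best = set(), set()
        for start in adj:
            if start in seen:
                continue
            comp, queue = set(), deque([start])
            while queue:
                node = queue.popleft()
                if node in comp:
                    continue
                comp.add(node)
                queue.extend(adj[node] - comp)
            seen |= comp
            best = max(best, comp, key=len)
        return best

    print(largest_component([(1, 2), (2, 3), (4, 5)]))   # {1, 2, 3}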

Vehicle self-positioning method based on street view image database

The invention discloses a vehicle self-positioning method based on a street view image database. The method comprises the following steps: 1. collecting street view images with a camera, extracting the dominant color feature vectors, SURF feature points and position information of the collected images, and storing the extracted information in a database; 2. taking images shot while the vehicle is driving as images to be matched, extracting their dominant color feature vectors, obtaining an initial matched image by computing the similarity between the dominant color feature vectors of the images to be matched and those in the database, extracting the position information of the initial matched image, and preliminarily determining the vehicle position; and 3. extracting the neighboring images of the initial matched image to form a search space, performing feature point matching between the images to be matched and the images in the search space to obtain the optimal matched image, extracting the shooting position coordinate of the optimal matched image and the position coordinates of its eight neighboring regions, calculating the weight of each coordinate, and then computing the precise vehicle position coordinate through a formula.
Owner:CHANGAN UNIV
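
The final coordinate computation fuses candidate shooting positions by weight; the patent's exact formula is not given here, so the score-normalized weighting below is only an illustrative stand-in:

    import numpy as np

    def weighted_position(coords, match_scores):
        """Fuse candidate (lon, lat) coordinates, weighting each by its
        normalized match score (illustrative weighting, not the patented one)."""
        w = np.asarray(match_scores, dtype=float)
        w /= w.sum()
        return w @ np.asarray(coords, dtype=float)

    coords = [(108.90, 34.20), (108.91, 34.21), (108.89, 34.19)]
    print(weighted_position(coords, match_scores=[0.9, 0.4, 0.3]))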

Finger vein recognition and security authentication method, terminal and system

The invention discloses a finger vein recognition and security authentication method, terminal and system. The method comprises: a collection step, in which the vein image of the finger is collected and encrypted; an image processing step; a feature extraction step, in which minutiae features are extracted from vein images meeting the quality evaluation requirements and stored; and a feature point matching step, in which the stored vein minutiae features are retrieved and compared against the vein image under test, the minutiae positions and angles are compared, the matching operation is completed, and the result is output. The invention further covers a finger vein recognition and security authentication terminal and system. A series of image processing and feature extraction operations on the collected finger vein image make the finally recognized vein image accurate and clear, effectively enhancing the accuracy and response speed of the whole finger vein recognition process. Meanwhile, encrypting the vein image improves the confidentiality of the whole recognition system, making it safer and more reliable.
Owner:SHENZHEN CASTLE SECURITY TECH CO LTD
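
The minutiae comparison (position plus angle within tolerance) can be sketched directly; the thresholds are illustrative, and no global alignment step is shown:

    import numpy as np

    def minutiae_match_score(stored, probe, pos_tol=8.0, ang_tol=np.deg2rad(15)):
        """Fraction of stored minutiae (x, y, angle) with a probe minutia within
        both a position and an angle tolerance."""
        hits = 0
        for x, y, a in stored:
            d = np.hypot(probe[:, 0] - x, probe[:, 1] - y)
            da = np.abs(np.angle(np.exp(1j * (probe[:, 2] - a))))   # wrapped diff
            hits += bool(np.any((d <= pos_tol) & (da <= ang_tol)))
        return hits / len(stored)

    stored = np.array([[10, 10, 0.1], [40, 55, 1.2]])
    probe = np.array([[12, 9, 0.15], [80, 80, 2.0]])
    print(minutiae_match_score(stored, probe))   # 0.5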

Sea cucumber detection and binocular visual positioning method based on deep learning

The invention provides a sea cucumber detection and binocular visual positioning method based on deep learning, suitable for the seabed sea cucumber fishing task of an underwater robot on a marine ranch. The method mainly comprises the following steps: calibrating the binocular cameras to obtain their internal and external parameters; rectifying the binocular cameras so that the imaging origin coordinates of the left and right views are consistent, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are aligned; collecting seabed image data with the calibrated binocular cameras; performing image enhancement on the collected image data with a dark channel prior algorithm based on white balance compensation; performing deep learning-based sea cucumber target detection on the enhanced seabed images; and applying a binocular stereo feature point matching algorithm to the enhanced, detected images to turn the two-dimensional bounding-box information of a target into its three-dimensional positioning coordinates. The method achieves accurate positioning of underwater sea cucumbers without manual participation.
Owner:HARBIN ENG UNIV
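
One building block of the enhancement step, the dark channel itself, is compact enough to sketch; the preceding white-balance compensation and the transmission estimation are omitted, and the path and parameters are placeholders:

    import cv2
    import numpy as np

    def dark_channel(img_bgr, patch=15):
        """Per-pixel channel minimum followed by a patch-wise minimum (erosion)."""
        min_c = img_bgr.min(axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        return cv2.erode(min_c, kernel)

    img = cv2.imread("underwater.png")        # placeholder path
    if img is not None:
        dc = dark_channel(img)
        airlight = np.percentile(dc, 99.9)    # rough atmospheric-light estimate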

Remote sensing image registration method based on anisotropic gradient scale space

The invention discloses a remote sensing image registration method based on an anisotropic gradient scale space, which mainly addresses the problem of low correct-match rates under large nonlinear brightness changes between remote sensing images. The implementation steps are as follows: (1) inputting the remote sensing image pair; (2) constructing the anisotropic diffusion scale space; (3) calculating the gradient magnitude image; (4) detecting feature points; (5) generating the main orientation of each feature point; (6) generating a descriptor for each feature point; (7) matching the feature points; (8) deleting wrongly matched feature point pairs; and (9) registering the reference image and the image to be registered. As feature point detection, main orientation generation and descriptor generation are all carried out on the gradient magnitude image in the anisotropic scale space, large nonlinear brightness changes between images can be handled effectively, and the method can be applied to complex multi-source and multispectral remote sensing image registration.
Owner:XIDIAN UNIV
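
An anisotropic diffusion scale space is typically built with Perona-Malik iterations, which blur homogeneous regions while suppressing smoothing across strong gradients (edges); a sketch with illustrative parameters:

    import numpy as np

    def anisotropic_diffusion(img, n_iter=10, kappa=30.0, step=0.2):
        """Perona-Malik diffusion: one scale level per call (parameters are
        illustrative; step <= 0.25 keeps the explicit scheme stable)."""
        u = img.astype(float)
        g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping function
        for _ in range(n_iter):
            dn = np.roll(u, -1, 0) - u
            ds = np.roll(u, 1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u, 1, 1) - u
            u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    print(anisotropic_diffusion(np.random.default_rng(3).normal(size=(64, 64))).std())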

Binocular vision positioning method for target grabbing of underwater robot

The invention relates to a binocular vision positioning method for target grabbing by an underwater robot, and belongs to the field of computer vision. The method is mainly used to accurately acquire the three-dimensional information of the grabbed target while the underwater robot works. The method comprises the following steps: binocular calibration, calculating the internal and external parameters of the left and right cameras; target detection, locating the target object detection frame; binocular image correction, carrying out distortion correction and stereo rectification and determining the target area in the right image; binocular image stereo matching, extracting image feature points, describing the feature points, performing stereo matching and removing mismatches; and calculating the three-dimensional information of the target in the image under the left camera coordinate system. An accurate parallax value is obtained by extracting feature points, removing unstable feature points through non-maximum suppression, constructing a binary descriptor, matching the feature points and removing mismatches. This scheme improves the robustness of binocular stereo matching while accurately obtaining the three-dimensional information of the detected target, meeting the real-time positioning requirement when the underwater robot grabs a target.
Owner:HARBIN ENG UNIV
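
The non-maximum suppression of unstable feature points can be sketched as keep-strongest-within-radius; a simple O(n^2) version with an assumed radius:

    import numpy as np

    def nms_keypoints(pts, responses, radius=5.0):
        """Keep a feature point only if no stronger point lies within `radius`
        pixels; returns indices of the survivors."""
        order = np.argsort(-np.asarray(responses))
        kept = []
        for i in order:
            p = pts[i]
            if all(np.hypot(*(p - pts[j])) > radius for j in kept):
                kept.append(i)
        return kept

    pts = np.array([[0, 0], [2, 1], [30, 30]], float)
    print(nms_keypoints(pts, responses=[0.9, 0.5, 0.7]))   # [0, 2]: point 1 suppressed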