
515 results about "Crucial point" patented technology

Registering control point extracting method combining multi-scale SIFT and area invariant moment features

Status: Inactive | Publication: CN101714254A | Benefit: compensates for susceptibility to noise and similar factors | Topics: Image analysis, Feature vector, Image processing
The invention discloses a registration control point extraction method that combines multi-scale SIFT and regional invariant moment features, in the field of image processing. It addresses the technical problem of extracting stable and reliable feature points during image registration. The method comprises the following steps: first, the images are repeatedly filtered with Gaussian kernel functions and, combined with downsampling, a DoG (difference-of-Gaussians) scale space is generated, in which the spatial and scale coordinates of local extrema are located. Next, feature vectors of each key point are formed from directional gradient information, and initial matching key-point pairs are obtained by Euclidean distance; then, local-region Hu invariant moment features are computed with each initially selected key point as the center, and accurate, effective registration control points are screened out, again using the Euclidean distance. By combining the multi-scale features of the SIFT algorithm with grayscale invariant moment features of local image regions, the method effectively improves the stability and reliability of extracting registration control point pairs from multi-sensor images.
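The Hu-moment screening step can be sketched as follows, computing the seven invariant moments of a grayscale patch around a candidate key point (a minimal numpy sketch; the patent's exact normalization and distance threshold are not given in the abstract, so only the standard Hu formulation is shown):

```python
import numpy as np

def hu_moments(patch):
    """Compute the 7 Hu invariant moments of a grayscale patch.

    Illustrative sketch of the moment features used to screen SIFT
    key-point pairs; invariant to translation and rotation.
    """
    patch = np.asarray(patch, dtype=float)
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = patch.sum()
    if m00 == 0:
        return np.zeros(7)
    xc, yc = (x * patch).sum() / m00, (y * patch).sum() / m00

    def mu(p, q):  # central moment of order (p, q)
        return (((x - xc) ** p) * ((y - yc) ** q) * patch).sum()

    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Because the moments are invariant to translation and rotation, two patches around truly corresponding key points yield nearly identical vectors, so their Euclidean distance can be used as the screening criterion the abstract describes.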
Owner:HARBIN INST OF TECH

Method for segmenting and indexing scenes by combining captions and video image information

The invention relates to a method for segmenting and indexing scenes by combining captions with video image information. Its key idea is that the collection of video frames within the duration of each caption is used as the minimum unit of scene clustering. The method comprises the steps of: after obtaining a minimum cluster unit, extracting at least three discontinuous video frames to form the video key-frame collection of that caption; comparing the similarity of the key frames of adjacent minimum units with a bidirectional SIFT key-point matching method, and establishing an initial attribution relationship between captions and scenes with the help of a caption-related transition diagram; for consecutive minimum cluster units judged dissimilar, further deciding whether they can be merged based on their relationship to the corresponding captions; and extracting the video scenes according to the determined caption-scene attribution relationships. For the extracted video scene segments, forward and inverted indexes generated from the caption text they contain serve as the basis for indexing the video segments.
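The bidirectional key-point matching idea — accept a pair only when the match holds in both directions — can be sketched on two descriptor arrays (`mutual_matches` is an illustrative name; the patent's similarity thresholds are omitted):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Bidirectional matching: keep pair (i, j) only when j is the
    nearest neighbour of descriptor i in B *and* i is the nearest
    neighbour of j in A. Sketch of the symmetric check used to compare
    key frames of adjacent minimum units."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in B for each row of A
    b_to_a = d.argmin(axis=0)   # best match in A for each row of B
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

The number (or ratio) of mutual matches between two key-frame sets then serves as the similarity score deciding whether two minimum units belong to the same scene.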
Owner:INST OF ACOUSTICS CHINESE ACAD OF SCI

3D point cloud FPFH characteristic-based real-time three dimensional space positioning method

The present invention relates to a real-time three-dimensional positioning method based on FPFH features of 3D point clouds. The method comprises: step 1) obtaining 3D point cloud data from a depth camera; step 2) selecting point cloud key frames; step 3) preprocessing the point clouds; step 4) feature description, using the ISS algorithm to obtain point cloud key points and computing the FPFH features of those key points; step 5) point cloud registration, first using the sample-consensus initial alignment algorithm to perform FPFH-based initial registration of two point clouds, then using the ICP algorithm to refine the initial result; step 6) coordinate transformation, obtaining the transformation matrix of the mobile robot's three-dimensional coordinates and transforming the current point cloud back to the initial position via this matrix; and step 7) repeating steps 1) to 6) to compute the robot's coordinates relative to the initial position as it moves. The method achieves good accuracy for real-time positioning of a mobile robot under poor illumination or in complete darkness.
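The ICP refinement of step 5 can be illustrated with a minimal point-to-point version (the FPFH/sample-consensus initial alignment is assumed to have already brought the clouds roughly together; `best_rigid_transform` and `icp` are illustrative names, not the patent's implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (SVD / Kabsch solution), assuming known correspondences."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Point-to-point ICP: alternate nearest-neighbour matching and
    closed-form rigid alignment; returns the accumulated transform."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every current source point
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(1)])
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)
```

In the method above, the matrix returned for each key frame is the "transformation matrix" of step 6 that maps the current cloud back into the initial coordinate frame.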
Owner:ZHEJIANG UNIV OF TECH

Scattered workpiece recognition and positioning method based on point cloud processing

Status: Inactive | Publication: CN108830902A | Benefits: achieves a unique feature description; reduces the probability of falling into a local optimum | Topics: Image enhancement, Image analysis, Local optimum, Pattern recognition
The invention discloses a scattered-workpiece recognition and positioning method based on point cloud processing, aimed at the pose estimation of scattered workpieces during random bin picking. The method comprises two parts: offline template library building and online feature registration. A template point cloud data set and a scene point cloud are obtained through a 3D point cloud acquisition system. Feature information extracted offline from the template point cloud is reused in the preprocessing, segmentation and registration of the scene point cloud, which speeds up the algorithm. Point cloud registration is divided into two stages: initial registration and precise registration. At the initial stage, a feature descriptor integrating geometric and statistical characteristics is proposed, achieving a unique description of each key point's features. The points most similar to a feature point's description are searched in the template library as corresponding points, yielding a corresponding point set from which an initial transformation matrix is computed. At the precise stage, geometric constraints are added to the selection of corresponding points, which reduces the number of iterations of precise registration and the probability of the algorithm falling into a local optimum.
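One way to realise a geometric constraint on correspondence selection is a rigidity check: a rigid transform preserves pairwise distances, so a correspondence that breaks this for most other pairs is discarded (a sketch under that assumption with illustrative names; the abstract does not publish the patent's actual criterion):

```python
import numpy as np

def filter_by_rigidity(src_pts, dst_pts, pairs, tol=0.05):
    """Keep correspondences (i, j) whose pairwise distances are
    preserved: for two correct pairs (i, j) and (k, l), the distance
    |src_i - src_k| must equal |dst_j - dst_l| up to `tol`.
    A pair survives if it is consistent with at least half the others."""
    n = len(pairs)
    ok = np.zeros(n, dtype=int)
    for a in range(n):
        i, j = pairs[a]
        for b in range(a + 1, n):
            k, l = pairs[b]
            ds = np.linalg.norm(src_pts[i] - src_pts[k])
            dd = np.linalg.norm(dst_pts[j] - dst_pts[l])
            if abs(ds - dd) < tol:
                ok[a] += 1
                ok[b] += 1
    keep = ok >= (n - 1) // 2
    return [p for p, kept in zip(pairs, keep) if kept]
```

Pruning inconsistent correspondences before the iterative refinement is what shrinks the iteration count and lowers the chance of converging to a local optimum.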
Owner:JIANGNAN UNIV +1

Method and server for achieving insurance claim anti-fraud based on consistency of multiple pictures

Status: Active | Publication: CN105719188A | Benefit: prevents insurance fraud that exaggerates the degree of loss | Topics: Image enhancement, Image analysis, Crucial point, Feature parameter
The invention discloses a method for insurance-claim anti-fraud based on the consistency of multiple pictures. The method comprises the steps of: grouping damage-assessment pictures of the same position of a vehicle into the same set; obtaining the key-point features of every set, grouping the assessment pictures within each set, and matching multiple related key points across the assessment pictures in each group; computing a feature point transformation matrix for each group from its related key points, and using that matrix to convert one picture in the group into a to-be-verified picture with the same shooting angle as another picture in the group; performing feature-parameter matching between each to-be-verified picture and the other picture in its group; and, when the feature parameters do not match, generating reminder information indicating that fraud may exist in the received pictures. The invention further provides a server applicable to the method. With this method and server, fraudulent insurance-claim practices can be identified automatically.
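The "feature point transformation matrix" that re-projects one picture to another picture's shooting angle can be modelled, for a roughly planar damaged surface, as a homography estimated from the matched key points (an assumption — the abstract does not name the transformation model; function names are illustrative):

```python
import numpy as np

def fit_homography(pts_a, pts_b):
    """Estimate a 3x3 homography H with pts_b ~ H * pts_a from >= 4
    matched key points, via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(pts_a, pts_b):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply H to 2-D points, including homogeneous normalisation."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

After warping, the two pictures nominally share a viewpoint, so mismatching feature parameters in the overlapping region suggest the pictures do not show the same physical damage.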
Owner:PING AN TECH (SHENZHEN) CO LTD

Three-dimensional point cloud reconstruction device and method based on multi-fusion sensor

The invention discloses a three-dimensional point cloud reconstruction device and method based on a multi-fusion sensor. The device comprises a multi-fusion sensor module, a system data processing module and a three-dimensional point cloud reconstruction module. The method comprises the following steps: acquiring point cloud data and a video image with the multi-fusion sensor device, and processing the video image to obtain a target image including target pose, position and texture information; acquiring the target point cloud corresponding to the original point cloud according to the pose and position information in the original point cloud and the target image; filtering the point clouds to remove noisy points; coarse point cloud registration, which includes key-point quality analysis; fine point cloud registration; and feeding the registered point cloud into a point cloud reconstruction unit to realize three-dimensional fusion and reconstruction. By extracting the feature points that relate the target image to the reconstruction point cloud of the original three-dimensional point cloud, the method obtains the target's three-dimensional position, texture and color information. The point cloud registration precision is thereby improved, and the probability of false identification and tracking loss is effectively reduced.
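The filtering step could, for instance, use statistical outlier removal, dropping points whose neighbourhood distances are anomalous (the abstract does not say which filter is used; this is one common choice, sketched in numpy):

```python
import numpy as np

def remove_outliers(cloud, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the cloud-wide mean by more than
    std_ratio standard deviations."""
    d = np.linalg.norm(cloud[:, None] - cloud[None, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return cloud[mean_knn <= thresh]
```

Removing such stray points before coarse registration keeps spurious geometry out of the key-point quality analysis.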
Owner:SHENZHEN WEITESHI TECH

Multi-scale normal feature point cloud registering method

The invention relates to a multi-scale normal-feature point cloud registration method, characterized by the following steps: point clouds from two viewpoints, a target point cloud and a source point cloud, collected by a point cloud acquisition device, are read in; the curvature of each point is computed in radius neighbourhoods at three scales, and key points are extracted from the target and source point clouds according to an objective function; the normal-vector angular deviation and the curvature of the key points in the radius neighbourhoods at the different scales are computed and used as feature components to form the key points' feature descriptors, yielding key-point feature vector sets for the target and source point clouds; correspondences between target and source key points are preliminarily determined from the similarity of their feature descriptors; wrong correspondences are eliminated, leaving accurate correspondences; the accurate correspondences are simplified with a clustering method to obtain evenly distributed correspondences; and singular value decomposition is applied to the final correspondences to obtain the rigid-body transformation matrix.
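The per-scale normal and curvature features can be sketched via PCA of a radius neighbourhood; repeating this at several radii gives the multi-scale components the descriptors are built from (a minimal numpy sketch with illustrative names):

```python
import numpy as np

def normal_and_curvature(cloud, idx, radius):
    """Estimate the surface normal and a curvature proxy for point
    `idx` from its radius neighbourhood, via the eigendecomposition of
    the local covariance. The curvature proxy is the standard
    lambda_min / (lambda_1 + lambda_2 + lambda_3) ratio."""
    nbrs = cloud[np.linalg.norm(cloud - cloud[idx], axis=1) <= radius]
    cov = np.cov(nbrs.T)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = vecs[:, 0]                # direction of least variance
    curvature = vals[0] / vals.sum()
    return normal, curvature
```

The angular deviation between such normals at different scales, together with the per-scale curvatures, then forms the feature components described above.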
Owner:HARBIN ENG UNIV

Face recognition method based on deep transformation learning in unconstrained scene

The invention discloses a face recognition method based on deep transformation learning in unconstrained scenes. The method comprises the following steps: obtaining a face image and detecting face key points; transforming the face image by face alignment, minimizing the distance between the detected key points and predefined key points during alignment; performing face pose estimation and classifying the pose estimation results; separating multi-sample face poses into different classes; performing pose transformation, converting non-frontal face features into frontal face features and computing the pose transformation loss; and updating the network parameters through the deep transformation learning method until a threshold requirement is met. The method introduces feature transformation inside a neural network, mapping features of different poses into a shared linear feature space; by computing the pose loss and learning pose centers and pose transformations, simple within-class variation is obtained. The method strengthens feature transformation learning and improves the robustness and discriminability of the deep features.
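The alignment step — minimizing the distance between detected and predefined key points — is commonly realised as a closed-form similarity transform (Umeyama's method); a sketch under that assumption, with `align_landmarks` as an illustrative name:

```python
import numpy as np

def align_landmarks(detected, template):
    """Similarity transform (scale s, rotation R, translation t)
    minimising ||s * R * x_i + t - y_i||^2 over all landmark pairs,
    via Umeyama's closed-form SVD solution."""
    cd, ct = detected.mean(0), template.mean(0)
    X, Y = detected - cd, template - ct
    U, S, Vt = np.linalg.svd(Y.T @ X / len(X))
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
    D = np.diag([1.0] * (X.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / X.var(0).sum()
    return s, R, ct - s * R @ cd
```

Applying the recovered `(s, R, t)` to the image warps every detected key point onto its predefined position, producing the aligned face fed into the network.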
Owner:唐晖

Trajectory data-based signal intersection periodic flow estimation method

The invention relates to a trajectory-data-based method for estimating the per-cycle flow at a signalized intersection. The method comprises the following steps: 1) trajectory point data of sampled vehicles are acquired, and the key points at which vehicles join and leave the queue are obtained; 2) a fitting method estimates the queuing wave and the dissipation wave of vehicle queuing, and their intersection point yields the flow estimate for the queued vehicles; 3) the density distribution function of the full-cycle flow is obtained from this flow estimate and the proportion of non-stopping vehicles in the full-cycle flow; and 4) based on this density distribution function, the full-cycle flow estimation problem is transformed into a parameter estimation problem using the Poisson distribution and the M3 distribution of non-queuing vehicles; maximum likelihood estimation is applied, solved with the expectation-maximization method, and the estimated arrival flow of each cycle is obtained. Compared with the prior art, the method fuses model analysis with statistical analysis, makes full use of trajectory information, and has wide applicability.
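Step 2 can be illustrated by fitting the two waves as straight lines in the time-position plane and intersecting them (a simplification of the patent's fitting step; real queuing and dissipation waves need not be linear, and the function name is illustrative):

```python
import numpy as np

def queue_dissipation_intersection(join_pts, leave_pts):
    """Fit the queuing wave (from vehicles joining the queue) and the
    dissipation wave (from vehicles leaving it) as least-squares lines
    x = a*t + b in the time-position plane, and return their
    intersection: the point of maximum queue extent."""
    a1, b1 = np.polyfit(join_pts[:, 0], join_pts[:, 1], 1)
    a2, b2 = np.polyfit(leave_pts[:, 0], leave_pts[:, 1], 1)
    t = (b2 - b1) / (a1 - a2)
    return t, a1 * t + b1
```

The position coordinate of the intersection bounds the queued region, from which the number of queued vehicles, and hence the flow estimate of step 2, is derived.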
Owner:TONGJI UNIV