
182 results about "Level fusion" patented technology

How Spinal Fusion Surgery Works. Each level of the spine consists of the disc in front and two facets (joints) in the back. Together, these structures define a motion segment. Fusing one segment, for example L4 to L5, is considered a one-level fusion.

Spinous process stabilization device and method

A fixation device that immobilizes a spinal motion segment and promotes posterior fusion, used as stand-alone instrumentation or as an adjunct to an anterior approach. The device functions as a multi-level fusion system built from modular single-level implementations. At a single level, the implant comprises a pair of plates spanning two adjacent vertebrae, with embedding teeth on the medially oriented surfaces directed into the spinous processes or laminae. The complementary plates at a single level are connected via a cross-post with a hemispherical base and a cylindrical shaft passed through the interspinous process gap and ratcheted into an expandable collar. The collar's spherical profile, contained within the opposing plate, allows the ratcheting mechanism to engage correctly, creating a uni-directional lock that secures the implant to the spine when a medially directed force is applied to both plates using a specially designed compression tool. Rotational freedom of both the cross-post and the collar lets the complementary plates connect at a range of angles in the axial and coronal planes, accommodating varying morphologies of the posterior elements in the cervical, thoracic, and lumbar spine. To achieve multi-level fusion, single-level implementations can be connected in series using an interlocking mechanism fixed by a set-screw.
Owner:GINSBERG HOWARD JOESEPH +2

A Surface Defect Detection Method Based on Fusion of Gray Level and Depth Information

The invention relates to an online detection method for surface defects of an object, and a device for implementing the method. By fusing grey-level and depth information, the accuracy of defect detection and discrimination is improved, and the method and device can be applied to objects with complicated shapes and surfaces. A grey image and a depth image of the object's surface are collected using a single colour area-array CCD (charge-coupled device) camera combined with several light sources of different colours, the depth information being obtained by a surface structured-light approach. Image segmentation and defect-edge extraction are carried out on the pixel-level fusion of the depth image and the grey image, so the region containing the defects is located more accurately. From the detected defect region, the grey-level, texture, and two-dimensional geometric features of the defects are extracted from the grey image, and the three-dimensional geometric features from the depth image; feature-level fusion is then performed, and the fused feature vector is used as the input of a classifier to classify the defects, thereby achieving defect discrimination.
Owner:UNIV OF SCI & TECH BEIJING
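The pixel-level fusion step described above can be sketched as a per-pixel weighted combination of the two images; the fixed weights and the plain-list image representation below are illustrative assumptions, not the patent's actual scheme:

```python
def pixel_level_fuse(gray, depth, w_gray=0.6, w_depth=0.4):
    """Fuse a grey image and a depth image pixel by pixel with a
    weighted average; segmentation and defect-edge extraction would
    then run on the fused map. Images are 2-D lists of floats."""
    return [[w_gray * g + w_depth * d for g, d in zip(g_row, d_row)]
            for g_row, d_row in zip(gray, depth)]
```

In practice the weights would be tuned (or spatially varying), and both images would first be normalised to a common intensity range.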

Multi-camera-based multi-target positioning and tracking method and system

The invention discloses a multi-camera-based multi-target positioning and tracking method, characterised by the following steps: first, installing several cameras at multiple viewing angles, planning a common surveillance area for them, and calibrating several height levels; sequentially performing foreground extraction, homography matrix calculation, foreground-likelihood fusion, and multi-level fusion; extracting the positioning information obtained at the selected height levels in the foreground-likelihood fusion step; processing the positioning information of each level with a shortest-path algorithm to obtain the tracking path at each level; and, combining these with the foreground-extraction results, completing the multi-target three-dimensional tracking. With this method, the cameras' vanishing points need not be computed during tracking, and a codebook model is introduced for the first time to solve the multi-target tracking problem, improving tracking accuracy; the method offers good stability, good real-time performance, and high precision.
Owner:DALIAN NATIONALITIES UNIVERSITY
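The foreground-likelihood fusion step can be sketched as a cell-wise product of the cameras' likelihood maps at one calibrated height level; the product rule and the grid representation are assumptions for illustration, since the patent's exact fusion rule is not given here:

```python
def fuse_foreground_likelihoods(camera_maps):
    """Fuse per-camera foreground likelihood grids, already warped
    by each camera's homography into a common plane at one height
    level, by multiplying cell-wise: a cell scores high only if
    every camera sees foreground there."""
    rows, cols = len(camera_maps[0]), len(camera_maps[0][0])
    fused = [[1.0] * cols for _ in range(rows)]
    for grid in camera_maps:
        for r in range(rows):
            for c in range(cols):
                fused[r][c] *= grid[r][c]
    return fused
```

Repeating this at each calibrated height level yields the per-level positioning maps that the shortest-path algorithm then links into tracks.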

Method for identifying the traffic status of an expressway based on information fusion

Inactive · CN101706996A · Solves the problem of low traffic-status recognition accuracy · Reliable decision support · Detection of traffic movement · Support vector machine · Binary tree
The invention discloses a method for identifying the traffic status of an expressway based on information fusion, belonging to the technical field of traffic information fusion. The method comprises the steps of: selecting traffic parameters and establishing an evaluation index system for status identification; building a binary-tree structure for traffic status identification according to a decision-tree algorithm; determining the number of fusion layers K of the binary tree (K >= 2) and setting i = 1; preprocessing the data of the ith layer according to its sample format requirements and determining the input samples of the ith layer; training the input samples of the ith layer with the support-vector-machine learning method to obtain a support vector machine; performing data fusion with the support vector machine to obtain a fusion result, and selecting the support vector machine for the next fusion level according to that result; and setting i = i + 1 and, while i does not exceed K, returning to the preprocessing step to execute data fusion at the next layer, otherwise ending the process. The method inherits the advantages of the traditional support-vector-machine information fusion method and solves the problem of low accuracy in expressway traffic status identification.
Owner:BEIJING JIAOTONG UNIV
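The layered binary-tree identification above can be sketched as a cascade: at each fusion layer, the current node's classifier (a stand-in for the trained support vector machine) picks which branch handles the next layer. The tiny threshold classifiers below are illustrative placeholders, not the patent's trained models:

```python
def cascade_identify(sample, layers):
    """Walk a binary tree of classifiers, one list of classifiers
    per fusion layer; each returns 0 or 1 to select the child node.
    The final leaf index stands for the identified traffic state."""
    node = 0
    for layer in layers:
        node = node * 2 + layer[node](sample)
    return node

# Illustrative stand-ins for the trained SVMs (K = 2 fusion layers),
# classifying a single scalar "traffic parameter".
layers = [
    [lambda v: 1 if v > 50 else 0],                                 # layer 1
    [lambda v: 1 if v > 20 else 0, lambda v: 1 if v > 80 else 0],   # layer 2
]
```

Each leaf of the tree then maps to one traffic status (e.g. free flow, slow, congested, jammed).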

Serial-parallel combined multi-mode emotion information fusion and identification method

The present invention discloses a serial-parallel combined multi-modal emotion information fusion and identification method, belonging to the field of emotion identification. The method mainly comprises: acquiring emotion signals; pre-processing the signals; extracting emotion feature parameters; and fusing and identifying the feature parameters. First, the extracted speech-signal and facial-expression feature parameters are concatenated into a serial feature vector set; M parallel training sample sets are then obtained by sampling with replacement, and sub-classifiers are trained with the Adaboost algorithm. The difference between every two classifiers is measured with a dual-error difference selection strategy, and finally a majority vote over the selected classifiers yields the final identification result, recognising the five basic human emotions of pleasure, anger, surprise, sadness, and fear. The method exploits the complementary advantages of decision-level and feature-level fusion, making the fusion process closer to human emotion recognition and thereby improving identification accuracy.
Owner:BOHAI UNIV
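The final voting stage can be sketched as a plain majority vote over the sub-classifiers' predicted labels (the label set follows the five basic emotions named above; the Adaboost training and classifier-selection steps are omitted):

```python
from collections import Counter

EMOTIONS = ("pleasure", "anger", "surprise", "sadness", "fear")

def majority_vote(predictions):
    """Return the label most sub-classifiers agreed on for one
    sample; ties break by first occurrence, per Counter.most_common."""
    return Counter(predictions).most_common(1)[0][0]
```

Each element of `predictions` is one sub-classifier's label for the same input sample.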

Personal identification method and near-infrared imaging apparatus based on palm vein and palm print

The invention provides a near-infrared imaging device and an identification method based on the palm vein and the palm print. First, a palm image is captured with the near-infrared imaging device and the central sub-block to be processed is extracted. The sub-block is fed to two feature-extraction modules: palm-print information-code extraction and vein-structure extraction. The two features are then matched separately, each similarity computed with its own similarity measure, and an optimised weighting of the palm print and the vein vessel structure is learned from training samples. The two similarities undergo similarity-level fusion, the fused similarity is compared against a predetermined threshold, and the final determination is made from the fused match. The device and method overcome the drawbacks of sparse image features and single-modality processing, improving the recognition rate and the stability of the system.
Owner:深圳市中识健康科技有限公司

Finger vein recognition method fusing local features and global features

The invention discloses a finger vein recognition method fusing local and global features. Most current vein recognition methods rely on local features of the vein image, so their recognition precision is strongly affected by image quality, and false rejections and false acceptances readily occur. The proposed method comprises: first, pre-processing the read-in finger vein image, including finger-area extraction and binarization; then, from the extracted minutiae point set, matching local features within a given angle and radius using a flexible-matching-based local feature recognition module; next, matching global features with a global feature recognition module based on bidirectional two-dimensional principal component analysis, which better captures the two-dimensional image data set as a whole; finally, designing weights according to the correct recognition rates of the two methods, performing decision-level fusion of the two classifiers' results, and taking the fused result as the final recognition result. The method is applied to finger vein recognition.
Owner:HEILONGJIANG UNIV
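Designing the weights from the two recognisers' correct-recognition rates and fusing at decision level can be sketched as follows; the per-class score lists and the simple normalisation scheme are assumptions for illustration:

```python
def fuse_decisions(local_scores, global_scores, local_acc, global_acc):
    """Weight each recogniser's per-class scores by its standalone
    correct-recognition rate, sum them, and return the index of
    the winning class as the final recognition result."""
    w_local = local_acc / (local_acc + global_acc)
    w_global = 1.0 - w_local
    fused = [w_local * a + w_global * b
             for a, b in zip(local_scores, global_scores)]
    return max(range(len(fused)), key=fused.__getitem__)
```

A more accurate recogniser thus gets proportionally more say in the fused decision.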

Secondary classification fusion identification method for fingerprint and finger vein bimodal identification

The invention provides a secondary classification fusion identification method for fingerprint and finger vein bimodal identification. A fingerprint module and a vein module serve as primary classifiers, and a secondary decision module serves as the secondary classifier. The method comprises: reading a fingerprint image and a vein image through the respective modules; pre-processing both images and extracting their feature point sets; identifying each image separately to obtain its own result, with fingerprint identification using a minutiae-matching method and vein identification using an improved Hausdorff distance; forming a new feature vector by serially concatenating the extracted fingerprint and vein feature point sets in the secondary decision module, thereby forming the secondary classifier and obtaining its identification result; and finally performing decision-level fusion of the three identification results. The method makes full use of the identification information of fingerprints and finger veins and effectively improves the accuracy of the identification system, achieving a high identification rate.
Owner:HARBIN ENG UNIV

Traveling vehicle vision detection method combining laser point cloud data

Active · CN110175576A · Avoids the problem of difficult access to spatial geometric information · Realizes 3D detection · Image enhancement · Image analysis · Histogram of oriented gradients · Vehicle detection
The invention discloses a traveling-vehicle vision detection method combining laser point cloud data, belongs to the field of unmanned driving, and addresses the problems of prior-art vehicle detection built around a laser radar. The method comprises: first, completing joint calibration of the laser radar and the camera, then performing time alignment; computing an optical-flow grey-scale map between adjacent frames of the calibrated video data and performing motion segmentation on it to obtain a motion region, i.e. a candidate region; searching the time-aligned point cloud corresponding to each image frame, within the conical space corresponding to the candidate region, for the points belonging to the vehicle, yielding a three-dimensional bounding box of the moving object; extracting histogram-of-oriented-gradients features from each frame within the candidate region; extracting features from the point cloud inside the three-dimensional bounding box; and, based on a genetic algorithm, performing feature-level fusion of the obtained features and classifying the fused motion regions to obtain the final traveling-vehicle detection result. The method is used for visual detection of traveling vehicles.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA

Audio/video keyword identification method based on decision-making level fusion

The invention relates to an audio/video keyword identification method based on decision-level fusion. The method mainly includes the following steps: (1) recording keyword audio/video, obtaining keyword and non-keyword acoustic feature vector sequences and visual feature vector sequences, and training keyword and non-keyword acoustic and visual templates accordingly; (2) obtaining acoustic and visual likelihoods from audio/video recorded in different acoustic noise environments, deriving the acoustic-mode reliability, visual-mode reliability, and optimal weight, and training an artificial neural network on them; (3) performing two-stage parallel keyword identification on the audio/video under test, based on the acoustic and visual modes, using the acoustic template, the visual template, and the neural network. By fusing the acoustic and visual contributions at the decision level and performing bimodal two-stage parallel identification, the method fully exploits the contribution of visual information in acoustically noisy environments, thereby improving identification performance.
Owner:PEKING UNIV SHENZHEN GRADUATE SCHOOL
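The decision-level fusion of acoustic and visual likelihoods can be sketched as a weighted combination per keyword, with the weight supplied by the reliability network; the dictionary shapes and weight values below are illustrative assumptions:

```python
def fuse_keyword_scores(acoustic, visual, w_acoustic):
    """Fuse per-keyword acoustic and visual likelihoods with the
    weight the trained network derives from the modality
    reliabilities, then return the best-scoring keyword."""
    fused = {kw: w_acoustic * acoustic[kw] + (1 - w_acoustic) * visual[kw]
             for kw in acoustic}
    return max(fused, key=fused.get)
```

In a noisy acoustic environment the network would drive `w_acoustic` down, letting the visual likelihoods dominate the decision.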

City impervious surface extraction method based on fusion of SAR image and optical remote sensing image

Provided is a city impervious-surface extraction method based on fusion of an SAR image and an optical remote sensing image. The method comprises: selecting in advance a general sample set of the study area, and generating from it, by random sampling, a classifier training set, a classifier test set, and a precision verification set for the extraction results; co-registering the optical remote sensing image with the SAR image of the study area and extracting features from both; training a random forest classifier and preliminarily extracting the impervious surface, yielding the optical remote sensing data, the SAR data, and a preliminary RF extraction result; performing decision-level fusion with the D-S evidence theory synthesis rule to obtain the final impervious-surface extraction result for the study area; and verifying the precision of each extraction result with the verification set. The method fully exploits the complementary strengths of the optical and SAR data sources; by fusing them with the RF classifier and D-S evidence theory, a higher-precision map of the city's impervious surface is obtained.
Owner:WUHAN UNIV
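The D-S evidence-theory synthesis step can be sketched with Dempster's rule of combination over singleton hypotheses such as "impervious" and "pervious"; compound focal elements and the full frame of discernment are omitted for brevity, so this is a simplified sketch rather than the patent's complete rule:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping a
    hypothesis label to its mass) with Dempster's rule: multiply
    agreeing masses, discard conflicting ones, renormalise."""
    combined, conflict = {}, 0.0
    for h1, mass1 in m1.items():
        for h2, mass2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + mass1 * mass2
            else:
                conflict += mass1 * mass2
    k = 1.0 - conflict  # normalisation factor (assumes k > 0)
    return {h: v / k for h, v in combined.items()}
```

Here one assignment would come from the optical-image classifier and the other from the SAR classifier, and the fused masses decide each pixel's label.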

Automatic identity recognition method for front-facing pedestrians in long-distance video

The invention provides an automatic identity recognition method for front-facing pedestrians in long-distance video. The method comprises a gait module and a face module, and proceeds as follows: first, a video file is read and the Adaboost method is used to detect pedestrians; if pedestrians are detected, the face module and the gait module are opened automatically, each applying kernel principal component analysis to extract features from the face and the gait respectively; finally, a decision-level fusion method in which the face features assist the gait features is used for recognition. The method proposes a new approach to long-distance identity recognition. Because the face features assist single-sample gait recognition, even when the gait training sample is a single sample and multiple face images are available, the effective number of training samples is expanded from another perspective, which benefits identity recognition; fusion with the face features improves recognition precision by 2.4%.
Owner:HARBIN ENG UNIV

A ground object classification method and device based on a multispectral image and an SAR image

The invention relates to the field of ground-object classification, in particular to a ground-object classification method and device based on a multispectral image and an SAR image. The method comprises: obtaining a multispectral image of a preset area and extracting multispectral image features from it; obtaining a time-series SAR image of the preset area and extracting time-series SAR image features from it; and performing feature-level fusion of the multispectral features and the time-series SAR features to obtain the ground-object classification result. The method and device exploit the synthetic aperture radar's all-day, all-weather operation and short revisit period to obtain a long time-series SAR image, increasing the input feature dimensionality. Feature-level fusion of the multispectral and SAR images makes full use of the multispectral information while aiding ground-feature interpretation with the structural, textural, and electromagnetic-scattering characteristics reflected by the time-series SAR image.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

Method for image fusion based on representation learning

Inactive · CN104851099A · Implements image fusion technology · Fast solution · Image enhancement · Image analysis · Image fusion · Boltzmann machine
The invention discloses a method for image fusion based on representation learning. The method comprises: acquiring a multi-source image; learning its features through a deep-neural-network framework composed of a sparse auto-encoder, a deep belief network built from Boltzmann machines, and a deep convolutional neural network; completing fusion of the multi-source image with the automatically learned features; and establishing an image-fusion model. The convex optimisation problem of the image-fusion model is studied, and the networks are initialised with unsupervised pre-training from deep learning so that they find a good solution quickly during training. A cooperatively trained deep-learning network is then built from two or more such networks according to the multi-source image features, realising an image fusion technique based on representation learning. The method studies feature-level fusion of images using artificial intelligence and deep-learning-based feature representation. Compared with traditional pixel-level fusion, it better captures image information and thus further improves the quality of image fusion.
Owner:ZHOUKOU NORMAL UNIV +1