
655 results about "Support vector machine classifier" patented technology

Remote sensing image classification method based on multi-feature fusion

The invention discloses a remote sensing image classification method based on multi-feature fusion, which includes the following steps: A, respectively extracting bag-of-visual-words features, color histogram features and texture features of the training set remote sensing images; B, using the bag-of-visual-words features, the color histogram features and the texture features of the training set images respectively to train support vector machines, obtaining three different support vector machine classifiers; and C, respectively extracting bag-of-visual-words features, color histogram features and texture features of unknown test samples, using the corresponding support vector machine classifiers obtained in step B to perform category prediction, obtaining three groups of prediction results, and combining the three groups of results with a weighted synthesis method to obtain the final classification result. The method further adopts an improved bag-of-words model for the extraction of the bag-of-visual-words features. Compared with the prior art, the method obtains more accurate classification results.
Owner:HOHAI UNIV
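
A minimal sketch of steps B and C with scikit-learn; the random feature arrays, the class count and the fusion weights are placeholders rather than the patent's actual descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for bag-of-visual-words, color-histogram and texture features of the training set.
X_bow, X_color, X_tex = rng.random((100, 200)), rng.random((100, 48)), rng.random((100, 59))
y = rng.integers(0, 4, size=100)  # four hypothetical land-cover classes

# Step B: train one SVM per feature type, with probability estimates enabled.
classifiers = [SVC(kernel="rbf", probability=True).fit(X, y) for X in (X_bow, X_color, X_tex)]

# Step C: weighted synthesis of the three prediction results.
weights = [0.5, 0.2, 0.3]  # hypothetical weights, e.g. tuned on a validation split

def fuse_predict(feature_views):
    """Combine per-view class probabilities with the fusion weights and return labels."""
    fused = sum(w * clf.predict_proba(X) for w, clf, X in zip(weights, classifiers, feature_views))
    return fused.argmax(axis=1)

print(fuse_predict([X_bow[:5], X_color[:5], X_tex[:5]]))
```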

Fuzzy clustering steel plate surface defect detection method based on pre-classification

The invention relates to the technical field of digital image processing and pattern recognition and discloses a fuzzy clustering steel plate surface defect detection method based on pre-classification. The method aims to overcome the missed and mistaken judgments of existing steel plate surface detection methods and to effectively improve the accuracy of online real-time detection of steel plate surface defects. The method includes the steps of: 1, acquiring steel plate surface defect images; 2, performing pre-classification on the images acquired in step 1 and determining the threshold intervals for image classification; 3, classifying the images within the threshold intervals of step 2 and generating white-highlighted defect targets; 4, extracting geometry, gray level, projection and texture characteristics of the defect images, determining the input vectors of a support vector machine classifier through feature dimensionality reduction, calculating the clustering centers of the various sample classes with a fuzzy clustering algorithm, and adopting the distance between two cluster centers as the classification scale of the support vector machine classifier; 5, determining the classification and acquiring the defect detection results.
Owner:CHONGQING UNIV
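
The distance-to-cluster-centre rule of step 4 can be sketched as follows; the fuzzy C-means routine is a generic textbook implementation in NumPy, and the feature vectors are random placeholders rather than the reduced geometry/gray-level/projection/texture features:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: returns cluster centers of shape (c, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers

rng = np.random.default_rng(1)
defect_features = rng.random((60, 8))           # placeholder reduced feature vectors
centers = fuzzy_cmeans(defect_features, c=2)    # two cluster centers, one per defect class

# Classify a new sample by its distance to the two cluster centers.
sample = rng.random(8)
label = int(np.argmin(np.linalg.norm(centers - sample, axis=1)))
print("assigned cluster:", label)
```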

Electroencephalogram feature extraction method based on brain functional network adjacency matrix decomposition

Status: Inactive | Publication number: CN102722727A | Topics: Character and pattern recognition, Matrix decomposition, Singular value decomposition
The invention relates to an electroencephalogram (EEG) feature extraction method based on decomposition of the brain functional network adjacency matrix. Current motor imagery EEG feature extraction algorithms mostly focus on qualitative and quantitative analysis of locally activated brain areas and ignore the interrelation and overall coordination of the brain areas. Starting from the brain functional network, and on the basis of complex brain network theory grounded in graph analysis, the method comprises the steps of: firstly, establishing the brain functional network from multi-channel motor imagery EEG signals; secondly, carrying out singular value decomposition on the network adjacency matrix; thirdly, forming from the singular values obtained by the decomposition a group of feature parameters that constitute the feature vector of the EEG signal; and fourthly, inputting the feature vector into a support vector machine classifier to complete the classification and identification of the various motor imagery tasks. The method has a wide application prospect in the identification of motor imagery tasks in the field of brain-computer interfaces.
Owner:启东晟涵医疗科技有限公司
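
An illustrative reading of this pipeline, assuming the functional network is built by thresholding inter-channel correlations (the abstract does not fix the network construction); random arrays stand in for the multi-channel motor imagery trials:

```python
import numpy as np
from sklearn.svm import SVC

def svd_features(trial, n_keep=8):
    """Build a functional-network adjacency matrix from one EEG trial
    (channels x samples) and return its leading singular values."""
    corr = np.corrcoef(trial)                  # channel-by-channel correlation
    adj = (np.abs(corr) > 0.1).astype(float)   # hypothetical threshold for an edge
    np.fill_diagonal(adj, 0.0)
    return np.linalg.svd(adj, compute_uv=False)[:n_keep]

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 16, 256))    # 40 trials, 16 channels, 256 samples (placeholders)
labels = rng.integers(0, 2, size=40)           # two motor imagery tasks, for illustration

X = np.array([svd_features(t) for t in trials])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```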

Hard disk failure prediction method for cloud computing platform

The invention discloses a hard disk failure prediction method for a cloud computing platform. The method comprises the following steps: marking the SMART log data of hard disks as normal hard disk samples and failed hard disk samples according to the hard disk maintenance records within a prediction time window; then dividing the denoised normal hard disk samples into k disjoint subsets with a K-means clustering algorithm; combining each of the k disjoint subsets with the failed hard disk samples; and generating k balanced training sets with SMOTE (Synthetic Minority Oversampling Technique) so as to obtain k support vector machine classifiers for predicting failed hard disks. In the prediction stage, the test set is clustered with DBSCAN (Density-Based Spatial Clustering of Applications with Noise): samples falling inside a cluster are predicted as normal hard disk samples, while noise samples are predicted by each of the trained classifiers and the final prediction result is obtained by voting. According to the method disclosed by the invention, hard disk failure prediction is carried out using the SMART data of the hard disks, and a relatively high failure recall ratio and good overall performance can be obtained.
Owner:NANJING UNIV
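
A rough sketch of the training and prediction flow, assuming the imbalanced-learn package for SMOTE; the synthetic SMART-like features, the value of k and the DBSCAN parameters are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE   # assumption: imbalanced-learn is installed

rng = np.random.default_rng(0)
X_normal = rng.standard_normal((300, 10))          # placeholder SMART features, healthy disks
X_failed = rng.standard_normal((15, 10)) + 3.0     # placeholder SMART features, failed disks

# Training: split the majority class into k subsets, balance each with SMOTE, train k SVMs.
k = 3
subset_ids = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_normal)
classifiers = []
for i in range(k):
    X_part = np.vstack([X_normal[subset_ids == i], X_failed])
    y_part = np.r_[np.zeros((subset_ids == i).sum()), np.ones(len(X_failed))]
    X_bal, y_bal = SMOTE(random_state=0, k_neighbors=5).fit_resample(X_part, y_part)
    classifiers.append(SVC().fit(X_bal, y_bal))

# Prediction: DBSCAN-clustered samples are taken as normal, noise samples are voted on.
X_test = np.vstack([rng.standard_normal((50, 10)), rng.standard_normal((5, 10)) + 3.0])
cluster_labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X_test)
pred = np.zeros(len(X_test))
noise = cluster_labels == -1
if noise.any():
    votes = np.array([clf.predict(X_test[noise]) for clf in classifiers])
    pred[noise] = (votes.mean(axis=0) >= 0.5).astype(float)   # majority vote on failure
print(pred)
```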

Emotion analysis system and method

Status: Inactive | Publication number: CN103034626A | Topics: Special data processing applications, Viewpoints, Support vector machine classifier
The invention discloses an emotion analysis system and method. The system comprises a corpus establishing module, a data preprocessing module, a viewpoint sentence identification module and an emotion tendency analysis module. The corpus establishing module is used for establishing the training sets needed by viewpoint sentence identification and emotion tendency analysis; the data preprocessing module is used for preprocessing the sentences in the training sets; the viewpoint sentence identification module is used for performing viewpoint sentence identification on the preprocessed sentences with a support vector machine classifier and a Bayes classifier respectively, and for integrating the results of the two classifiers to obtain a final classification result; and the emotion tendency analysis module is used for directly classifying the preprocessed sentences into positive, negative and non-viewpoint sentences on the basis of the support vector machine classifier and the Bayes classifier respectively, and for integrating the classification results of the two classifiers through an integration formula to obtain the classification result of the current sentence. With the system and method, the viewpoint sentence judgment and emotion tendency classification performance on Chinese microblogs can be improved.
Owner:SHANGHAI JIAO TONG UNIV
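
A simplified sketch of integrating the two classifiers, with random bag-of-words counts standing in for the preprocessed microblog sentences; the equal-weight probability average is only a stand-in for the patent's integration formula:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_sentences, vocab = 90, 300

# Placeholder bag-of-words counts for preprocessed sentences and their labels:
# 0 = negative, 1 = positive, 2 = non-viewpoint sentence.
X = rng.poisson(0.2, size=(n_sentences, vocab))
y = rng.integers(0, 3, size=n_sentences)

svm = SVC(probability=True).fit(X, y)
nb = MultinomialNB().fit(X, y)

# Integrate the two classifiers by averaging their class probabilities
# (an equal-weight stand-in for the patent's integration formula).
fused = 0.5 * svm.predict_proba(X) + 0.5 * nb.predict_proba(X)
print(fused.argmax(axis=1)[:10])
```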

Portrait and vehicle recognition alarming and tracking method

The invention relates to an alarming and tracking method for the identification of portraits and vehicles. Its technical features are as follows: first, target detection is conducted, the changed target area is extracted from the video image, and a feature vector of the target area is extracted with a central-beam method; then, according to the extracted feature vector, a well-trained support vector machine classifier determines whether the target is an image of a human body or a vehicle, and an alarm signal is given if an abnormal condition occurs. Meanwhile, according to the continuity of the target's trajectory during movement, a particle filter technique is adopted to carry out tracking only in the regions where the target may exist. The method has the advantage that, when the target is seriously interfered with or affected by noise and the matching reliability is therefore low, a prediction can be applied to reasonably estimate the position of the target so that normal tracking is maintained. The method features a small computational load and excellent real-time performance, and can be widely applied in national defense and civil fields.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
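
The tracking stage relies on a particle filter; the minimal one-dimensional bootstrap filter below only illustrates the predict-weight-resample loop under a constant-velocity assumption and is not the patent's image-domain tracker:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 30
true_pos, velocity = 0.0, 1.0

particles = rng.normal(0.0, 1.0, n_particles)      # initial position hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    true_pos += velocity
    measurement = true_pos + rng.normal(0.0, 0.5)   # noisy observation of the target

    particles += velocity + rng.normal(0.0, 0.2, n_particles)          # predict (motion model)
    weights *= np.exp(-0.5 * ((measurement - particles) / 0.5) ** 2)    # weight by likelihood
    weights /= weights.sum()

    estimate = np.sum(weights * particles)          # posterior mean = position estimate

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"final estimate {estimate:.2f} vs true position {true_pos:.2f}")
```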

EEG features based emotional state recognition method

The invention discloses an EEG features based emotional state recognition method. The method comprises the following steps. Data acquisition stage: under induction by international affective pictures, 64-channel EEG data of the subjects are collected while pictures of different pleasure levels are presented. Data preprocessing stage: the collected 64-channel EEG data undergo four stages of reference potential transformation, down-sampling, band-pass filtering and electro-oculogram artifact removal. Feature extraction stage: time-domain features are extracted after the preprocessed signals are filtered with a common spatial pattern algorithm. Feature recognition: the features are classified with a support vector machine classifier to differentiate the different emotional states. In the method, a one-versus-rest (OVR) common spatial pattern algorithm is used to remove the interference of background signals and to enhance the signals of the multiple classes of emotion-induced EEG; after the background signals are removed, the differences among the different classes of emotional EEG are strengthened, the recognition accuracy of the subjects is relatively satisfactory when time-domain variance features are used, and emotions of different pleasure levels can be differentiated accurately.
Owner:TIANJIN UNIV
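
A two-class sketch of the common spatial pattern filtering and time-domain variance features, with random arrays in place of the preprocessed 64-channel EEG; the patent's one-versus-rest extension would repeat this for each emotion class against the rest:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(epochs_a, epochs_b, n_pairs=3):
    """Two-class CSP filters from epochs shaped (trials, channels, samples)."""
    cov = lambda e: np.mean([t @ t.T / np.trace(t @ t.T) for t in e], axis=0)
    Ca, Cb = cov(epochs_a), cov(epochs_b)
    vals, vecs = eigh(Ca, Ca + Cb)                    # generalized eigendecomposition
    order = np.argsort(vals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]   # most discriminative filters from both ends
    return vecs[:, keep].T

def log_variance(epochs, W):
    """Normalized log-variance of the spatially filtered signals (time-domain features)."""
    Z = np.einsum("fc,tcs->tfs", W, epochs)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
epochs_a = rng.standard_normal((30, 64, 500))         # placeholder trials for one emotional state
epochs_b = rng.standard_normal((30, 64, 500))         # placeholder trials for another state

W = csp_filters(epochs_a, epochs_b)
X = np.vstack([log_variance(epochs_a, W), log_variance(epochs_b, W)])
y = np.r_[np.zeros(30), np.ones(30)]
print(SVC().fit(X, y).score(X, y))
```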

Multi-aspect deep learning representation-based image emotion classification method

The invention discloses a multi-aspect deep learning representation-based image emotion classification method. The method comprises the following steps: (1) designing an image emotion classification model, which comprises a parallel convolutional neural network model and a support vector machine classifier used to carry out decision fusion on the network features; (2) designing the parallel convolutional neural network structure, which comprises five networks with the same structure, each network comprising five convolutional layer groups, a fully connected layer and a softmax layer; (3) carrying out salient subject extraction and HSV format conversion on the original image; (4) training the convolutional neural network model; (5) fusing the image emotion features learned by the multiple convolutional neural networks and training the SVM classifier to carry out decision fusion on them; and (6) classifying user images with the trained image emotion classification model so as to realize image emotion classification. With the method disclosed by the invention, the obtained image emotion classification results accord with human emotion standards, and the judgement process requires no manual intervention, so that fully automatic machine-based image emotion classification is realized.
Owner:SOUTH CHINA UNIV OF TECH
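
Step (5) can be approximated as below, with random arrays standing in for the emotion features produced by the five parallel networks; the concatenate-then-SVM fusion is one plausible reading of the decision-fusion step, not the patent's exact scheme:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, n_networks, feat_dim = 200, 5, 128

# Placeholder per-network feature representations of the same images
# (in the patent these come from the five parallel convolutional networks).
network_features = [rng.standard_normal((n_images, feat_dim)) for _ in range(n_networks)]
labels = rng.integers(0, 8, size=n_images)       # e.g. eight emotion categories, for illustration

# Fuse the five representations and let an SVM make the final decision.
X_fused = np.hstack(network_features)
svm = SVC(kernel="rbf").fit(X_fused, labels)
print(svm.predict(X_fused[:5]))
```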

Pulmonary nodule detection device and method based on shape template matching combined with a classifier

A pulmonary nodule detection device and method based on shape template matching combined with a classifier comprises an input unit, a pulmonary parenchyma region processing unit, an ROI (region of interest) extraction unit, a coarse screening unit, a feature extraction unit and a secondary detection unit. The input unit is used for inputting pulmonary CT sectional sequence images in DICOM format; the pulmonary parenchyma region processing unit is used for segmenting the pulmonary parenchyma regions from the CT sectional sequence images, repairing the segmented pulmonary parenchyma regions with a boundary encoding algorithm, and reconstructing the pulmonary parenchyma regions with a surface rendering algorithm after three-dimensional observation and repair; the ROI extraction unit is used for setting a gray level threshold and extracting the ROI from the repaired pulmonary parenchyma regions; the coarse screening unit is used for performing coarse screening on the ROI with a template matching algorithm designed from the morphological features of pulmonary nodules, acquiring candidate pulmonary nodule regions; the feature extraction unit is used for extracting various feature parameters from the gray levels and morphological features of the candidate nodules as the sample set for subsequent detection; and the secondary detection unit is used for performing secondary detection on the candidate nodule regions through a support vector machine classifier and acquiring the final detection result.
Owner:KANGDA INTERCONTINENTAL MEDICAL EQUIP CO LTD

Transfer learning and feature fusion-based ultrasonic thyroid nodule benign and malignant classification method

Status: Active | Publication number: CN106780448A | Topics: Image enhancement, Image analysis, Sonification, Support vector machine classifier
The invention discloses a transfer learning and feature fusion-based method for benign and malignant classification of ultrasonic thyroid nodules. The method comprises the following steps: firstly, preprocessing the ultrasound image and scaling it to a uniform size; extracting traditional low-level features of the ultrasound image; extracting high-level semantic features of the ultrasound image through a transfer learning method, using a model obtained by deep neural network training on natural images; fusing the low-level features with the high-level features; carrying out feature screening according to the discriminative power for benign and malignant thyroid nodules so as to obtain the final feature vector, which is used to train a support vector machine classifier; and carrying out the final benign and malignant classification of thyroid nodules. In the method disclosed by the invention, the low-level features and the high-level features are fused and salient feature screening is carried out, so that the insufficient ability of single features to describe thyroid nodule characteristics at the semantic level is overcome and the classification precision is effectively improved; and by introducing transfer learning, the problems that medical sample images are few and deep features cannot be obtained by direct training are solved.
Owner:TSINGHUA UNIV +1
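
A sketch of the fuse-screen-classify chain; the placeholder arrays stand in for the hand-crafted descriptors and for deep features transferred from a network pretrained on natural images, and the ANOVA F-score is only a stand-in for the patent's benign/malignant discrimination measure:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_nodules = 120
low_level = rng.standard_normal((n_nodules, 30))   # placeholder texture/shape descriptors
deep = rng.standard_normal((n_nodules, 512))       # placeholder transferred CNN features
labels = rng.integers(0, 2, size=n_nodules)        # 0 = benign, 1 = malignant (synthetic)

# Fuse low-level and high-level features, then screen them by how well
# they separate benign from malignant nodules.
fused = np.hstack([low_level, deep])
selector = SelectKBest(f_classif, k=64).fit(fused, labels)
X = selector.transform(fused)

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```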

Visual grasping method and device based on depth image, and readable storage medium

Status: Inactive | Publication number: CN107748890A | Topics: Character and pattern recognition, Cluster algorithm, Pattern recognition
The invention discloses a visual grasping method and device based on depth images, and a readable storage medium. The method comprises the following steps: a point cloud image is acquired through a Kinect depth camera; the acquired point cloud image is segmented through a RANSAC (random sample consensus) algorithm and a Euclidean clustering algorithm, and the target object to be identified is obtained through the segmentation; 3D global features and color features of the object are respectively extracted and fused to form a new global feature; off-line training of a multi-class support vector machine (SVM) classifier is carried out using the new global features of the objects, and the category of the target object is identified by the trained multi-class SVM classifier according to its new global feature; then the category and the grasping position of the target object are determined; and lastly, according to the category and the grasping position of the target object, a manipulator and a gripper are controlled to grasp the target object and move it to the specified position. The method has the advantage that the target object can be accurately identified and grasped.
Owner:SHANTOU UNIV

Expression recognition method fusing depth images and multi-channel features

The invention discloses an expression recognition method fusing depth images and multi-channel features. The method comprises the steps of: performing face region detection on an input facial expression image and performing preprocessing; selecting the multi-channel features of the image, where in the texture feature aspect the depth image entropy, the grayscale image entropy and the color image salient features are extracted as facial expression texture information and the texture features of this information are extracted with a grayscale histogram method, and in the geometric feature aspect facial expression feature points are extracted as geometric features from the color image with an active appearance model; and fusing the texture features and the geometric features, selecting different kernel functions for the different features to perform kernel function fusion, and passing the fusion result to a multi-class support vector machine classifier for expression classification. Compared with the prior art, the method effectively overcomes the influence of factors such as varying illumination, varying head poses and complex backgrounds in expression recognition, increases the expression recognition rate, and has good real-time performance and robustness.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
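
One way to realize the kernel-fusion step is scikit-learn's precomputed-kernel SVM, sketched below; the kernel choices, the fusion weights and the random feature arrays are illustrative assumptions rather than the patent's configuration:

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 90
texture = rng.standard_normal((n, 40))      # placeholder texture (entropy/histogram) features
geometry = rng.standard_normal((n, 20))     # placeholder geometric (feature point) features
labels = rng.integers(0, 6, size=n)         # e.g. six basic expressions, for illustration

# Different kernels for different feature types, fused with fixed weights.
K = 0.6 * rbf_kernel(texture, gamma=0.05) + 0.4 * linear_kernel(geometry)
clf = SVC(kernel="precomputed").fit(K, labels)

# Prediction needs the same fused kernel between test and training samples.
K_test = 0.6 * rbf_kernel(texture[:5], texture, gamma=0.05) + 0.4 * linear_kernel(geometry[:5], geometry)
print(clf.predict(K_test))
```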

Universal voice wake-up recognition method and system under a full-phoneme framework

The invention discloses a universal voice wake-up recognition method and system under a full-phoneme framework. The method comprises the following steps: firstly, training a deep neural network acoustic model, modifying the dictionary according to the wake-up words, constructing a filler-based decoding network, and training a support vector machine classifier on training samples; preprocessing the input voice, feeding the processed voice features into the decoding network for decoding, calculating acoustic scores with the deep neural network acoustic model, and obtaining the decoding results; and inputting the statistics of the successfully recognized decoding results into the support vector machine classifier for classification to obtain the final recognition result. In the method disclosed by the invention, the triphone states obtained by expanding all toneless phonemes are modelled, yielding a universal acoustic model. During decoding, the decoding paths are constrained, so the wake-up performance can be improved. Meanwhile, the post-processing stage analyses multi-dimensional statistics such as the phoneme posterior probabilities on each path, eliminating the hidden risk of an increased false alarm rate.
Owner:INST OF ACOUSTICS CHINESE ACAD OF SCI +1

Automatic optical inspection method for printed circuit board comprising resistance element

The invention relates to an automatic optical inspection method for printed circuit boards comprising resistance elements. After the features of the solder joints are extracted, the solder joints are classified into the three types of normal, starved solder and missing part with a support vector machine classifier. The method is suitable for the classification and inspection of such solder joints during production. The method comprises the following steps: converting the red areas of the solder joint images into a grayscale image and a binary image; calculating the mean value and standard deviation based on the grayscale image, and the height-lightness ratio, the cross-correlation and the area of the colored region based on the binary image; and classifying the solder joints by quality using the mean value, variance, height-lightness ratio and similarity level of the solder joint images in the support vector machine classifier. Defective solder joint types are distinguished through the mean value, variance, height-lightness ratio and area features of the solder joints. After the quality of the solder joints is determined, the defective solder joints can be further classified into the two types of starved solder and missing part.
Owner:SOUTH CHINA UNIV OF TECH

Support vector machine classification method based on simultaneously blending multi-view features and multi-label information

The invention discloses a support vector machine classification method based on simultaneously blending multi-view features and multi-label information. The method comprises the following steps: inputting multi-view feature training data and the multi-label information corresponding to each data item; establishing a mathematical model of a support vector machine classifier that simultaneously blends the multi-view features and the multi-label information, and setting the value of the weight factor corresponding to each term; training and learning the parameters of the classifier, using an alternating iterative algorithm to update all parameter variables of the target optimization formula until the absolute value of the difference between the overall objective function values of two successive iterations is less than a preset threshold value, at which point the iteration stops; meanwhile, when one parameter is updated, the strategy of fixing the values of the other parameters is adopted. The classifier obtained by training performs multi-label classification or prediction on real data. When the support vector machine performs classification, a unified data representation in a new data space is learned, and the accuracy of the classifier is improved.
Owner:ZHEJIANG UNIV
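
For orientation only, a plain multi-label SVM baseline over concatenated views is shown below; the patent's actual model jointly learns a shared representation with an alternating (block-coordinate) update, which this sketch does not reproduce:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 150
view_1 = rng.standard_normal((n, 20))           # placeholder features from one view
view_2 = rng.standard_normal((n, 35))           # placeholder features from another view
label_sets = [rng.choice(5, size=rng.integers(1, 3), replace=False) for _ in range(n)]

# Binarize the label sets and concatenate the views into one representation.
Y = MultiLabelBinarizer().fit_transform(label_sets)
X = np.hstack([view_1, view_2])

# One binary SVM per label as a simple multi-label baseline.
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
print(clf.predict(X[:3]))
```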