
1,511 results about "SVM classifier" patented technology

The SVM classifier is a powerful supervised classification method. It is well suited to segmented raster input but can also handle standard imagery, and it is widely used in the research community.
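
As a minimal illustration of the technique that all of the patents below build on, the sketch here trains a generic SVM classifier on synthetic feature vectors. It assumes scikit-learn; the data and parameters are made up for the example.

```python
# Minimal generic SVM classification sketch (synthetic data, scikit-learn assumed).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))                       # feature vectors (e.g. per segment or pixel)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # two synthetic classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)      # supervised training
print("held-out accuracy:", clf.score(X_te, y_te))
```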

Effective multi-class support vector machine classification

An improved method of classifying examples into multiple categories using a binary support vector machine (SVM) algorithm. In one preferred embodiment, the method includes the following steps: storing a plurality of user-defined categories in a memory of a computer; analyzing a plurality of training examples for each category so as to identify one or more features associated with each category; calculating at least one feature vector for each of the examples; transforming each of the at least one feature vectors so as to reflect information about all of the training examples; and building an SVM classifier for each one of the plurality of categories, wherein the process of building an SVM classifier further includes: assigning each of the examples in a first category to a first class and all other examples belonging to other categories to a second class, wherein if any one of the examples belongs to another category as well as the first category, such examples are assigned to the first class only; optimizing at least one tunable parameter of an SVM classifier for the first category, wherein the SVM classifier is trained using the first and second classes; and optimizing a function that converts the output of the binary SVM classifier into a probability of category membership.
Owner:KOFAX
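
The per-category one-vs-rest training and the final conversion of SVM outputs into membership probabilities described above can be illustrated with a short sketch. This is not the patented implementation: the category labels are synthetic and Platt-style sigmoid calibration stands in for the patent's probability-conversion function.

```python
# Minimal one-vs-rest SVM sketch with probability calibration (illustrative only;
# not the patented implementation). Assumes scikit-learn and synthetic data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # feature vectors for training examples
y = rng.integers(0, 4, size=300)          # four hypothetical user-defined categories

models = {}
for category in np.unique(y):
    # First class: examples of this category; second class: everything else.
    binary_labels = (y == category).astype(int)
    svm = LinearSVC(C=1.0)                # C is the tunable parameter to optimize
    # Sigmoid calibration converts the binary SVM output into a probability
    # of category membership (a stand-in for the patent's conversion function).
    models[category] = CalibratedClassifierCV(svm, method="sigmoid", cv=3).fit(X, binary_labels)

# Predict membership probability of a new example for every category.
x_new = rng.normal(size=(1, 20))
probs = {c: m.predict_proba(x_new)[0, 1] for c, m in models.items()}
print(probs)
```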

Automobile surface damage classification method and device based on deep learning

The invention relates to the field of image detection, and in particular to an automobile surface damage classification method and device based on deep learning, which address shortcomings in the prior art by performing feature learning and classification on input images to be detected. Specifically, candidate regions are extracted from the images to be detected by a region selective search algorithm, and the location information of the candidate regions is recorded; the images to be detected are input into a feature map extraction network model without an output layer so as to extract the feature vectors of the candidate regions; the feature vectors of the candidate regions are input into an SVM classifier to find the target feature vectors; the locations of the corresponding candidate regions on the images to be detected, namely the target regions, are found according to the positions of the target feature vectors in the feature map; and the target regions are input into an optimal classification network model, which outputs the probability of each region belonging to each damage level.
Owner:高前文
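
A compact sketch of the region-proposal, feature-extraction and SVM-screening pipeline follows. It is illustrative only: `propose_regions` and `extract_features` are hypothetical stand-ins for the selective-search step and the headless feature-map network, and the training data is synthetic.

```python
# Sketch of the region-proposal -> CNN-feature -> SVM pipeline described above.
# Illustrative only: `propose_regions` and `extract_features` are hypothetical
# stand-ins for the selective-search step and the headless feature-map network.
import numpy as np
from sklearn.svm import SVC

def propose_regions(image):
    """Hypothetical selective-search stand-in: returns (x, y, w, h) boxes."""
    h, w = image.shape[:2]
    return [(0, 0, w // 2, h // 2), (w // 2, h // 2, w // 2, h // 2)]

def extract_features(image, box):
    """Hypothetical CNN-without-output-layer stand-in: returns a feature vector."""
    x, y, w, h = box
    crop = image[y:y + h, x:x + w]
    return np.array([crop.mean(), crop.std(), w * h], dtype=float)

# Train an SVM on previously extracted region features (labels: damaged / not damaged).
train_feats = np.random.rand(100, 3)
train_labels = np.random.randint(0, 2, 100)
svm = SVC(kernel="rbf", probability=True).fit(train_feats, train_labels)

image = np.random.rand(256, 256)
for box in propose_regions(image):
    feat = extract_features(image, box).reshape(1, -1)
    if svm.predict(feat)[0] == 1:            # region judged to contain damage
        print("candidate damage region:", box, "p =", svm.predict_proba(feat)[0, 1])
```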

Special audio event layered and generalized identification method based on SVM (Support Vector Machine) and GMM (Gaussian Mixture Model)

The invention relates to a layered, generalized identification method for special audio events based on a combination of an SVM (Support Vector Machine) and a GMM (Gaussian Mixture Model), and belongs to the technical field of computers and audio event identification. The method comprises the following steps: first, obtaining audio feature vector files for the training samples; second, carrying out model training on a large number of audio feature vector files of various types using a GMM method and an SVM method respectively, so as to obtain a GMM with generalization capability and an SVM classifier, completing offline training; and finally, carrying out layered identification on the audio feature vector files to be identified using the GMM and the SVM classifier. The method addresses the problems of conventional special audio event identification: low identification efficiency on continuous audio streams and a high miss probability for audio events of very short duration. The method can be applied to searching for special audio and to content-based monitoring of network audio.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
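
The layered identification step can be sketched as a GMM screening layer followed by an SVM classification layer. This is a minimal illustration with synthetic features; the feature dimension, log-likelihood threshold and number of event classes are assumptions.

```python
# Sketch of layered identification: a GMM first screens feature vectors by
# log-likelihood, then an SVM assigns the final event class. Illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(500, 13))          # e.g. MFCC-like feature vectors
train_labels = rng.integers(0, 3, size=500)       # three hypothetical event types

gmm = GaussianMixture(n_components=8, random_state=0).fit(train_feats)   # layer 1
svm = SVC(kernel="rbf").fit(train_feats, train_labels)                   # layer 2

def identify(feature_vector, loglik_threshold=-30.0):
    """Return an event label, or None if the GMM layer rejects the frame."""
    score = gmm.score_samples(feature_vector.reshape(1, -1))[0]
    if score < loglik_threshold:
        return None                                # not a special audio event
    return int(svm.predict(feature_vector.reshape(1, -1))[0])

print(identify(rng.normal(size=13)))
```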

Improved CNN-based facial expression recognition method

The invention provides an improved CNN-based facial expression recognition method, and relates to the field of image classification and recognition. The method comprises the following steps: s1, acquiring facial expression images from a video stream by using the JDA algorithm, a face detection and alignment algorithm that integrates both functions; s2, for the facial expression images obtained in step s1, correcting the face pose in the real environment, removing background information irrelevant to the expression, and applying scale normalization; s3, training a convolutional neural network model on the normalized facial expression images obtained in step s2, and obtaining and storing the optimal network parameters; s4, loading the CNN model with the optimal network parameters obtained in step s3, and performing feature extraction on the normalized facial expression images obtained in step s2; s5, classifying and recognizing the facial expression features obtained in step s4 by using an SVM classifier. The method has high robustness and good generalization performance.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
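
Steps s4 and s5 amount to using the trained CNN as a feature extractor (its softmax head removed) and handing the features to an SVM. The sketch below illustrates that pattern with a tiny hypothetical Keras model and synthetic face crops; the network structure, image size and number of expression classes are assumptions, not the patented model.

```python
# Sketch of steps s4/s5: use a trained CNN (here a hypothetical Keras model) as a
# feature extractor by dropping its softmax layer, then classify with an SVM.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Hypothetical stand-in for the trained expression CNN loaded in step s4.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu", name="feature"),
    tf.keras.layers.Dense(7, activation="softmax"),   # softmax head, not used for features
])
extractor = tf.keras.Model(cnn.input, cnn.get_layer("feature").output)

faces = np.random.rand(200, 48, 48, 1).astype("float32")   # normalized face crops
labels = np.random.randint(0, 7, 200)                      # seven expression classes
features = extractor.predict(faces, verbose=0)
svm = SVC(kernel="rbf").fit(features, labels)
print(svm.predict(extractor.predict(faces[:1], verbose=0)))
```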

Method for early warning of sensitive client electric energy experience quality under voltage dip disturbance

Status: Inactive (CN103487682A) · Benefits: reduces the risk of electricity supply and use; accurately monitors power quality disturbances · Tags: electrical testing, normal density, SVM classifier
The invention provides a method for early warning of the electric energy experience quality of sensitive clients under voltage dip disturbance. The method comprises the following steps: voltage dip disturbances affecting sensitive clients are automatically identified based on a fast S-transform algorithm and an incremental SVM classifier; based on the identification results, voltage tolerance curves of the devices corresponding to multiple types of sensitive clients at different load levels are determined; historical monitoring data of voltage dip disturbances serve as samples and are converted into values of a voltage dip magnitude severity index (MSI) and a duration severity index (DSI), the probability density function of the MSI and DSI is determined on the basis of the maximum entropy principle, the fault probability of each sensitive device is evaluated, and the fault probabilities of the devices corresponding to the sensitive clients under the given voltage dip level are obtained. With this method, power quality disturbances can be monitored accurately, whether a client load is affected by a disturbance can be determined according to each client's load sensitivity, and potential risks to load operation can be found.
Owner:SHENZHEN POWER SUPPLY BUREAU +1
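
The incremental disturbance-identification step can be sketched with an incrementally trained linear SVM. Here scikit-learn's SGDClassifier with hinge loss (updated via partial_fit) stands in for the patent's incremental SVM classifier, and random vectors stand in for fast-S-transform features.

```python
# Sketch of incremental disturbance classification: an SGD-trained linear SVM
# (hinge loss) updated with partial_fit serves as a stand-in for the patent's
# incremental SVM classifier. Feature vectors would come from the fast S-transform;
# here they are random placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])                 # 0: no dip disturbance, 1: voltage dip
clf = SGDClassifier(loss="hinge")          # linear SVM trained incrementally

rng = np.random.default_rng(2)
for batch in range(5):                     # new monitoring data arrives in batches
    X_batch = rng.normal(size=(64, 10))    # placeholder S-transform feature vectors
    y_batch = rng.integers(0, 2, size=64)
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.predict(rng.normal(size=(1, 10))))
```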

Identity identification method and apparatus based on combination of gait and face

The invention relates to an identity identification method and apparatus combining gait and face based on deep learning. The apparatus comprises a video acquisition and preprocessing module, a gait feature extraction module, a face feature extraction module and an identification module. The method includes: acquiring a video stream, and performing pedestrian detection and tracking and face detection on the video stream; performing gait feature extraction on human body images and calculating quality evaluation scores of the gait features; performing face feature extraction on face images, calculating quality evaluation scores of the face features, and taking the face image with the highest quality evaluation score as the face image to be identified; and weighting the gait features and the face features according to their respective quality scores, and inputting the weighted features into an SVM classifier for identity identification. By calculating quality evaluation scores for the gait and face features and weighting them accordingly, the method and apparatus combine the advantages of face identification and gait identification so that the two technologies complement each other, increasing the robustness of the identification system and improving the accuracy of person identification.
Owner:武汉神目信息技术有限公司
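
The quality-weighted fusion and SVM identification step might look like the following sketch. The feature dimensions and the weighting rule (normalized quality scores) are assumptions for illustration, not the patented formulas.

```python
# Sketch of quality-weighted fusion of gait and face features before SVM
# identification. Illustrative only: feature dimensions and the weighting rule
# (normalized quality scores) are assumptions, not the patented formulas.
import numpy as np
from sklearn.svm import SVC

def fuse(gait_feat, face_feat, gait_quality, face_quality):
    """Weight each modality by its normalized quality score and concatenate."""
    total = gait_quality + face_quality
    return np.concatenate([(gait_quality / total) * gait_feat,
                           (face_quality / total) * face_feat])

rng = np.random.default_rng(3)
# Training data: 50 samples for each of 5 hypothetical identities.
fused_train = np.stack([fuse(rng.normal(size=64), rng.normal(size=128),
                             rng.uniform(0.2, 1.0), rng.uniform(0.2, 1.0))
                        for _ in range(250)])
identities = np.repeat(np.arange(5), 50)
svm = SVC(kernel="linear").fit(fused_train, identities)

probe = fuse(rng.normal(size=64), rng.normal(size=128), 0.9, 0.4)
print("predicted identity:", svm.predict(probe.reshape(1, -1))[0])
```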

Multi-aspect deep learning expression-based image emotion classification method

The invention discloses a multi-aspect deep learning expression-based image emotion classification method. The method comprises the following steps: (1) designing an image emotion classification model comprising a parallel convolutional neural network model and a support vector machine classifier which carries out decision fusion on the network features; (2) designing the parallel convolutional neural network structure, which comprises five networks with the same structure, each network comprising five convolutional layer groups, a fully connected layer and a softmax layer; (3) carrying out salient subject extraction and HSV format conversion on the original image; (4) training the convolutional neural network model; (5) fusing the image emotion features learned by the multiple convolutional neural networks, and training the SVM classifier to carry out decision fusion on the image emotion features; and (6) classifying user images with the trained image emotion classification model so as to realize image emotion classification. The image emotion classification results obtained by the method accord with human emotion standards, and the judgment process requires no human intervention, so fully automatic machine-based image emotion classification is realized.
Owner:SOUTH CHINA UNIV OF TECH
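
Step (5), fusing the features learned by the parallel networks and training the SVM for decision fusion, can be sketched as follows. The five CNN branches are replaced by simple hypothetical feature extractors, and the data is synthetic.

```python
# Sketch of step (5): features from several parallel CNNs (placeholders here)
# are concatenated and an SVM performs the decision fusion. Illustrative only;
# the five-network structure is reduced to simple stand-in feature extractors.
import numpy as np
from sklearn.svm import SVC

def cnn_branch_features(images, seed):
    """Hypothetical stand-in for one trained CNN branch's learned features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(images.shape[1], 32))
    return np.tanh(images @ W)             # (n_samples, 32) branch features

rng = np.random.default_rng(4)
images = rng.normal(size=(200, 256))       # flattened image representations
labels = rng.integers(0, 2, size=200)      # positive / negative emotion

# Concatenate the outputs of the five parallel branches, then fuse with an SVM.
fused = np.hstack([cnn_branch_features(images, seed) for seed in range(5)])
svm_fuser = SVC(kernel="rbf").fit(fused, labels)
print(svm_fuser.predict(fused[:3]))
```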

Action detection model based on convolutional neural network

The invention discloses an action detection model based on a convolutional neural network, and belongs to the field of computer vision research. An efficient action detection model is constructed by using a convolutional neural network from deep learning, thereby recognizing actions in video and detecting and localizing them. The action detection model is composed of a Faster R-CNN (Region-based Convolutional Neural Network), an SVM (Support Vector Machine) classifier and an action pipeline, each part completing its corresponding operation. The Faster R-CNN acquires a number of regions of interest from each video frame and extracts a feature from each region of interest. The detection model extracts features with a two-channel design, namely a Faster R-CNN channel based on the frame picture and a Faster R-CNN channel based on the optical flow picture, which extract an appearance feature and a motion feature respectively. The two features are then fused into a spatio-temporal feature, the spatio-temporal feature is input to the SVM classifier, and the SVM classification gives an action type prediction for the corresponding region. Finally, the action pipeline gives the final action detection result from the perspective of the whole video.
Owner:BEIJING UNIV OF TECH
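
The two-channel fusion step can be sketched as concatenating appearance and motion features of each region of interest into a spatio-temporal feature and classifying it with an SVM. The features below are random placeholders for the two Faster R-CNN channels, and the number of action types is assumed.

```python
# Sketch of the two-channel fusion: appearance and motion features of a region
# of interest (placeholders for the two Faster R-CNN channels) are concatenated
# into a spatio-temporal feature and classified by an SVM. Illustrative only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_regions = 400
appearance = rng.normal(size=(n_regions, 256))        # frame-picture channel features
motion = rng.normal(size=(n_regions, 256))            # optical-flow channel features
action_labels = rng.integers(0, 6, size=n_regions)    # six hypothetical action types

spatio_temporal = np.hstack([appearance, motion])     # fused feature per region
svm = LinearSVC().fit(spatio_temporal, action_labels)

new_region = np.hstack([rng.normal(size=256), rng.normal(size=256)]).reshape(1, -1)
print("predicted action type:", svm.predict(new_region)[0])
```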

Human body action recognition method based on a TP-STG framework

Status: Active (CN109492581A) · Benefits: reduces prediction loss; prevents early divergence · Tags: character and pattern recognition, internal combustion piston engines, human body, SVM classifier
The invention discloses a human body action recognition method based on a TP-STG framework, which comprises the following steps: taking video information as input, adding prior knowledge to an SVM classifier, and applying a posterior discrimination criterion to remove non-person targets; segmenting the person targets with a target localization and detection algorithm, outputting each person target as a target box with coordinate information, and providing input data for human body key-point detection; using an improved pose recognition algorithm to carry out body part localization and correlation analysis so as to extract all human body key-point information and form a key-point sequence; and constructing a spatio-temporal graph on the key-point sequence with the action recognition algorithm, applying multi-layer spatio-temporal graph convolution operations, and carrying out action classification with a Softmax classifier, thereby achieving human body action recognition in complex scenes. The method is combined with the actual scene of an ocean platform for the first time, and the proposed TP-STG framework is the first attempt to identify worker activities on an offshore drilling platform using target detection, pose recognition and spatio-temporal graph convolution.
Owner:CHINA UNIV OF PETROLEUM (EAST CHINA)
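
The first stage, removing non-person targets with an SVM and a posterior discrimination criterion, might be sketched as follows. The criterion is reduced to a threshold on the SVM decision value, which is an assumption for illustration; the detection features are synthetic.

```python
# Sketch of the first step: an SVM scores candidate detections, and a posterior
# discrimination rule (a threshold on the decision value, assumed here) removes
# non-person targets before key-point extraction. Illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
train_feats = rng.normal(size=(300, 32))            # detection-box feature vectors
train_labels = rng.integers(0, 2, size=300)         # 1 = person, 0 = non-person
svm = SVC(kernel="linear").fit(train_feats, train_labels)

def keep_person_targets(candidate_feats, margin=0.25):
    """Keep candidates whose SVM decision value clears a posterior margin."""
    scores = svm.decision_function(candidate_feats)
    return [i for i, s in enumerate(scores) if s > margin]

candidates = rng.normal(size=(10, 32))
print("person target indices:", keep_person_targets(candidates))
```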

Floating vehicle information processing method under parallel road network structure

The invention relates to a method for processing floating car information under a parallel road network structure, mainly applied in the field of intelligent dynamic traffic information services. The method comprises the following steps: step 1, an SVM (Support Vector Machine) classifier is used to judge the matching relationship between sampling points and roads, and map matching is carried out; step 2, according to the map matching information, a heuristic route inference algorithm is used to infer the possible travel routes of a car, and whether the car is travelling within a parallel road network structure is judged according to the properties of the road links contained in those routes; depending on the circumstances, the shortest travel route or a main and a subsidiary travel route are output, and the reliability of the average travel speed information provided by the car is estimated; step 3, classification-based D-S (Dempster-Shafer) evidence reasoning is used to fuse the road condition information provided by all the floating cars passing a given road link in the current period, taking the reliability of that information into account in the fusion, so as to obtain the mathematical expectations of the average speed and travel time of the floating cars passing the road link in the current period. With the data quality of existing floating cars unchanged, the invention realizes the acquisition of real-time dynamic traffic information under a parallel road network structure.
Owner:BEIHANG UNIV
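
Step 3's evidence fusion can be illustrated with Dempster's rule of combination over a two-state frame of discernment. The mass assignments below are made up for the example and are not values from the patent.

```python
# Sketch of step 3: Dempster's rule of combination fuses two floating-car reports
# over the frame of discernment {congested, free}. The mass assignments below are
# made-up illustrations, not values from the patent.
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts over subsets given as frozensets)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CONG, FREE = frozenset({"congested"}), frozenset({"free"})
EITHER = CONG | FREE                          # ignorance: either state is possible
car_a = {CONG: 0.6, FREE: 0.1, EITHER: 0.3}   # reliable car, leans congested
car_b = {CONG: 0.3, FREE: 0.3, EITHER: 0.4}   # less reliable car, uncertain
print(dempster_combine(car_a, car_b))
```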

Image block deep learning characteristic based infrared pedestrian detection method

Status: Active (CN106096561A) · Benefits: addresses poor selection algorithm performance; sufficient data · Tags: character and pattern recognition, visual technology, data set
The invention relates to an infrared pedestrian detection method based on deep-learned features of image blocks, and belongs to the technical fields of image processing and computer vision. In the method, the data set is divided into a training set and a test set. In the training stage, small image blocks are first extracted in a sliding manner from the positive and negative samples of the infrared pedestrian data set and clustered, and one convolutional neural network is trained for each cluster of image blocks; feature extraction is then carried out on the positive and negative samples using the trained group of convolutional neural networks, and an SVM classifier is trained. In the test stage, regions of interest are first extracted from the test image, feature extraction is then carried out on each region of interest using the trained group of convolutional neural networks, and finally prediction is performed with the SVM classifier. The method achieves pedestrian detection by judging whether each region of interest belongs to a pedestrian region, so that pedestrians in an infrared image can be detected accurately even when the detection scene is complicated, the environment temperature is high, or the pedestrians vary greatly in scale and pose, providing support for research in related follow-up fields such as intelligent video.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
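
The image-block idea, clustering sliding patches and describing each region of interest in terms of those clusters before an SVM decision, is sketched below. Note the deliberate simplification: a bag-of-patches histogram over k-means clusters stands in for the patent's group of per-cluster convolutional neural networks.

```python
# Sketch of the image-block idea: sliding patches are clustered with k-means and
# each region of interest is described by its histogram over patch clusters
# (a lightweight bag-of-patches stand-in for the patent's per-cluster CNN group),
# then an SVM decides pedestrian / non-pedestrian. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

PATCH, K = 8, 16

def patches(img):
    """Extract 8x8 blocks in a sliding (non-overlapping) manner."""
    return np.array([img[i:i + PATCH, j:j + PATCH].ravel()
                     for i in range(0, img.shape[0] - PATCH + 1, PATCH)
                     for j in range(0, img.shape[1] - PATCH + 1, PATCH)])

def roi_feature(img, kmeans):
    """Histogram of the ROI's patches over the learned clusters."""
    labels = kmeans.predict(patches(img))
    return np.bincount(labels, minlength=K) / len(labels)

rng = np.random.default_rng(7)
rois = [rng.random((64, 32)) for _ in range(60)]          # infrared ROI crops
labels = rng.integers(0, 2, size=60)                      # 1 = pedestrian
kmeans = KMeans(n_clusters=K, n_init=10).fit(np.vstack([patches(r) for r in rois]))
svm = SVC(kernel="rbf").fit(np.stack([roi_feature(r, kmeans) for r in rois]), labels)
print(svm.predict(roi_feature(rng.random((64, 32)), kmeans).reshape(1, -1)))
```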

Pedestrian detection method based on video processing

The invention relates to a pedestrian detection method based on video processing. The method comprises the steps of (1) extracting the foreground: extracting the moving-object image of each frame of the video, marking the images and storing them in sequence, and extracting the background with a background model that adopts a Gaussian mixture model; (2) conducting preliminary screening of the foreground, using the shape features of a pedestrian for identification; (3) accurately identifying the foreground: extracting HOG features from the foreground images that pass the preliminary screening, then classifying them with a low-dimensional soft-output SVM pedestrian classifier to judge whether a pedestrian is present. The method further comprises the step of (4) conducting error-correction processing in a secondary thread: for foreground images whose soft output from the low-dimensional SVM classifier is ambiguous, a high-dimensional SVM classifier is called in the secondary thread for recognition. The pedestrian detection method based on video processing improves detection accuracy with good real-time performance.
Owner:ZHEJIANG ZHIER INFORMATION TECH
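
Steps (3) and (4) can be sketched as a fast linear SVM on HOG features whose ambiguous (small-margin) decisions are deferred to a slower, higher-capacity SVM. The margin threshold and the placeholder HOG vectors are assumptions for illustration.

```python
# Sketch of steps (3)-(4): a fast linear SVM on HOG features makes the first
# decision, and ambiguous cases (small decision margin, threshold assumed here)
# are deferred to a slower high-capacity SVM. Illustrative only.
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(8)
hog_train = rng.normal(size=(500, 324))      # placeholder HOG feature vectors
labels = rng.integers(0, 2, size=500)        # 1 = pedestrian

fast_svm = LinearSVC().fit(hog_train, labels)          # low-dimensional, soft output
slow_svm = SVC(kernel="rbf").fit(hog_train, labels)    # high-capacity fallback

def detect(hog_feat, margin=0.3):
    score = fast_svm.decision_function(hog_feat.reshape(1, -1))[0]
    if abs(score) >= margin:                 # confident soft output
        return int(score > 0)
    # Ambiguous: hand over to the secondary-thread classifier.
    return int(slow_svm.predict(hog_feat.reshape(1, -1))[0])

print(detect(rng.normal(size=324)))
```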

Dressing safety detection method for worker on working site of electric power facility

The invention discloses a dressing safety detection method for workers on the working site of an electric power facility. An SVM (support vector machine) classifier is trained on HOG (histogram of oriented gradients) features to identify workers on the working site, and whether a worker is properly dressed is judged based on the identification result. The method comprises detecting worker targets appearing on the working site with the trained HOG-based classifier, and judging, for each identified worker target, whether the worker's dress and equipment meet the site's safety requirements, mainly covering items such as whether a helmet is worn, whether the safety clothing is worn completely (with no exposed skin), and whether a worker on a pole-mounted transformer is correctly wearing a safety belt. With this method, a worker's dress can be checked before the worker enters the working site, and no additional supervising worker needs to be deployed; moreover, if a worker's dress does not conform to the norms, the worker is warned and prompted in advance, so that safety accidents caused by nonstandard dress are avoided and potential safety hazards are eliminated.
Owner:STATE GRID CORP OF CHINA +6
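
The judgement stage, aggregating the individual dress checks into one verdict, can be sketched with plain rule logic. The per-item booleans below are hypothetical outputs of the HOG+SVM detectors described above.

```python
# Sketch of the judgement step: per-item checks (placeholders for HOG+SVM
# detectors of helmet, clothing coverage and safety belt) are aggregated into a
# single dressing-safety verdict and warning message. Illustrative only.
from typing import Dict, List

REQUIRED_ITEMS = ["helmet", "full_safety_clothing", "safety_belt"]

def dressing_verdict(item_checks: Dict[str, bool]) -> List[str]:
    """Return the list of missing safety items for one detected worker."""
    return [item for item in REQUIRED_ITEMS if not item_checks.get(item, False)]

# Hypothetical detector outputs for one worker near a pole-mounted transformer.
worker = {"helmet": True, "full_safety_clothing": False, "safety_belt": True}
missing = dressing_verdict(worker)
if missing:
    print("warning: nonstandard dress, missing:", ", ".join(missing))
else:
    print("dress meets site safety requirements")
```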

Method for extracting and fusing time, frequency and space domain multi-parameter electroencephalogram features

The invention relates to a method for extracting and fusing multi-parameter electroencephalogram (EEG) features in the time, frequency and space domains, which comprises the following steps: 1) collecting an EEG signal; 2) pre-processing the EEG data; 3) extracting the Kc complexity, approximate entropy and wavelet entropy from the pre-processed data; 4) obtaining the EEG singular value decomposition matrix parameters on the basis of the AMUSE algorithm; 5) performing feature selection on the extracted time, frequency and space domain parameters, namely the Kc complexity, approximate entropy, wavelet entropy and EEG singular value decomposition matrix parameters; 6) using an SVM classifier to fuse and classify the four groups of time, frequency and space domain parameters after feature selection. With this method, the Kc complexity, approximate entropy, wavelet entropy and EEG singular value decomposition matrix parameters can be selected to comprehensively represent the EEG feature information and then effectively fused, providing effective support for the early diagnosis and assessment of brain-function disorders such as Alzheimer's disease and mild cognitive impairment.
Owner:秦皇岛市惠斯安普医学系统股份有限公司 +1
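
Steps 5) and 6), feature selection followed by SVM fusion of the four parameter groups, might be sketched as follows. Univariate selection (SelectKBest) is a stand-in for the patent's selection step, and all parameter values are synthetic.

```python
# Sketch of steps 5)-6): the four parameter groups are concatenated per EEG epoch,
# a simple univariate feature selection is applied (a stand-in for the patent's
# selection step), and an SVM fuses and classifies them. Illustrative only.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
n_epochs = 120
kc = rng.normal(size=(n_epochs, 4))          # Kc complexity per channel group
apen = rng.normal(size=(n_epochs, 4))        # approximate entropy
wen = rng.normal(size=(n_epochs, 4))         # wavelet entropy
svd_params = rng.normal(size=(n_epochs, 8))  # AMUSE-based SVD matrix parameters
labels = rng.integers(0, 2, size=n_epochs)   # e.g. patient vs. control

features = np.hstack([kc, apen, wen, svd_params])
model = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
model.fit(features, labels)
print(model.predict(features[:5]))
```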

License plate recognition method based on deep convolutional neural network

The invention belongs to the technical fields of image processing and pattern recognition, and particularly relates to a license plate recognition method based on a deep convolutional neural network. The method includes: performing license plate detection on vehicle images, performing image segmentation on the detected license plates to obtain license plate characters, using the license plate characters as training samples to build a training sample block set, inputting the training sample block set into a deep auto-encoder to train it, using the trained deep auto-encoder as the convolution kernels of the convolutional neural network, extracting the convolution features of the training sample block set, performing pooling on the convolution features to obtain feature vectors, normalizing the feature vectors, feeding the normalized feature vectors into an SVM classifier to train it, and testing on vehicles to be recognized. The method increases license plate recognition accuracy and improves the recognition rate and robustness of license plate characters in severe environments.
Owner:ANHUI SUN CREATE ELECTRONICS
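
The feature path, convolution with learned kernels, pooling, normalization and SVM training, can be sketched as follows. Random filters stand in for the auto-encoder-derived convolution kernels, and the character patches and labels are synthetic.

```python
# Sketch of the feature path: character patches are convolved with a learned
# filter bank (random filters stand in for the auto-encoder-derived kernels),
# max-pooled, L2-normalized and fed to an SVM. Illustrative only.
import numpy as np
from scipy.signal import convolve2d
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

rng = np.random.default_rng(10)
filters = rng.normal(size=(6, 5, 5))          # stand-in for auto-encoder kernels

def conv_pool_features(patch):
    """Convolve with each filter, 2x2 max-pool, flatten."""
    feats = []
    for f in filters:
        resp = convolve2d(patch, f, mode="valid")                # (28, 28) for 32x32 input
        pooled = resp.reshape(14, 2, 14, 2).max(axis=(1, 3))     # 2x2 max pooling
        feats.append(pooled.ravel())
    return np.concatenate(feats)

chars = rng.random((200, 32, 32))              # segmented character patches
labels = rng.integers(0, 10, size=200)         # digit classes, as an example
X = normalize(np.stack([conv_pool_features(c) for c in chars]))  # L2 normalization
svm = SVC(kernel="linear").fit(X, labels)
print(svm.predict(X[:3]))
```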

Polarized SAR (synthetic aperture radar) image classification method based on depth PCA (principal component analysis) network and SVM (support vector machine)

The invention discloses a polarized SAR (synthetic aperture radar) image classification method based on a deep PCA (principal component analysis) network and an SVM (support vector machine) classifier. The method includes: filtering the polarized SAR image; extracting shape feature parameters, scattering feature parameters, polarization feature parameters and the independent elements of the covariance matrix C, and combining and normalizing them into a new high-dimensional feature serving as the data to be processed in the next step; according to the actual ground-object labels, randomly selecting 10% of the labelled data from each class as training samples; whitening the training samples and using them as input to train the first layer of the network, taking the result as the input of the second layer to train the second layer, and performing binarization and histogram statistics on the output; taking the output of the deep PCA network as the finally learned features and training the SVM classifier on them; whitening the test samples, inputting them into the trained network framework for prediction, and calculating the accuracy; and coloring and displaying the classified image and outputting the final result.
Owner:XIDIAN UNIV
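
A compact PCANet-style sketch of the two-stage PCA network, binarization, histogram statistics and SVM training follows. The patch size, filter counts and the simplified bit-hashing are assumptions, and random arrays stand in for the polarized SAR feature channels.

```python
# Sketch of a PCANet-style pipeline: PCA filters are learned from image patches,
# applied as convolutions over two stages, the responses are binarized and
# histogrammed, and an SVM classifies the resulting feature. Illustrative only;
# patch size, filter counts and the simplified hashing are assumptions.
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def learn_pca_filters(images, n_filters=4, k=5):
    """Learn k x k PCA filters from the k x k patches of the input images."""
    patches = np.array([im[i:i + k, j:j + k].ravel()
                        for im in images
                        for i in range(0, im.shape[0] - k + 1, k)
                        for j in range(0, im.shape[1] - k + 1, k)])
    pca = PCA(n_components=n_filters).fit(patches - patches.mean(axis=1, keepdims=True))
    return pca.components_.reshape(n_filters, k, k)

def stage(images, filters):
    """Convolve every image with every filter (one PCA network stage)."""
    return [convolve2d(im, f, mode="same") for im in images for f in filters]

def binary_histogram(maps):
    """Binarize stage-2 maps, hash groups of 4 bits and histogram the codes."""
    bits = np.concatenate([(m > 0).ravel() for m in maps]).astype(int)
    codes = bits.reshape(-1, 4) @ (2 ** np.arange(4))
    return np.bincount(codes, minlength=16) / codes.size

rng = np.random.default_rng(11)
images = [rng.normal(size=(30, 30)) for _ in range(40)]   # stand-ins for PolSAR feature channels
labels = rng.integers(0, 3, size=40)                      # three terrain classes

f1 = learn_pca_filters(images)                            # stage-1 filters
f2 = learn_pca_filters(stage(images, f1))                 # stage-2 filters
features = np.stack([binary_histogram(stage(stage([im], f1), f2)) for im in images])
svm = LinearSVC().fit(features, labels)
print(svm.predict(features[:5]))
```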