
Data classification method based on depth-width variable multi-kernel learning

A technology of multi-kernel learning and data classification, applied in the field of data classification based on depth-width variable multi-kernel learning. It addresses problems such as the degradation of classification results.

Pending Publication Date: 2020-10-02
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

However, for the diverse data encountered in practice, excessive or insufficient feature extraction can instead degrade classification results. The method should therefore be able to select its structure and extract features according to the data.


Examples


Embodiment 1

[0090] A data classification method based on depth-width variable multi-kernel learning, the method comprising the following steps:

[0091] Step 1: Data set preparation. Randomly divide the data set, using 50% of the samples as the training set to train the model parameters and the remaining 50% as the test set to verify the performance of the algorithm. Before being input to the algorithm, a data set with n samples is arranged into an n×(m+1)-dimensional matrix, where m is the number of features per sample and the last dimension holds the label information; for data with M classes, the labels range from 0 to M;
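Step 1 can be sketched in Python (the patent's experiments were run in MATLAB; `prepare_dataset` and the toy data below are illustrative assumptions, not the authors' code):

```python
import numpy as np

def prepare_dataset(X, y, seed=0):
    """Arrange n samples into an n x (m+1) matrix (last column = label)
    and split 50/50 into training and test sets, as in Step 1."""
    rng = np.random.default_rng(seed)
    data = np.hstack([X, y.reshape(-1, 1)])    # n x (m+1), label last
    idx = rng.permutation(len(data))           # random 50/50 division
    half = len(data) // 2
    return data[idx[:half]], data[idx[half:]]  # train, test

# toy example: 10 samples, 3 features, binary labels
X = np.arange(30, dtype=float).reshape(10, 3)
y = np.array([0, 1] * 5)
train, test = prepare_dataset(X, y)
```

The split ratio and the label-in-last-column layout follow the step above; the random seed is only for reproducibility.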

[0092] Step 2: The algorithm structure for classifying data sets is as follows. The DWS-MKL algorithm combines MKL with the hierarchical cascading idea of deep learning to construct a unified multi-kernel learning architecture of multi-layer, multi-channel combinations. The channels are independent of each other. The number of layers of the arc...
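A minimal sketch of one channel's combined kernel, assuming a convex combination of the four base kernels named in Embodiment 2 (linear, RBF, 2nd- and 3rd-order polynomial); the simplex-normalized weighting here is an illustrative assumption, not the patent's exact update rule:

```python
import numpy as np

def linear_kernel(X, Z):
    return X @ Z.T

def rbf_kernel(X, Z, gamma=1.0):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def poly_kernel(X, Z, degree, alpha=1.0, beta=1.0):
    # free parameters alpha=1, beta=1 as in Embodiment 2
    return (alpha * (X @ Z.T) + beta) ** degree

def channel_kernel(X, Z, weights):
    """One channel's combined kernel: a weighted combination of the
    four base kernels (linear, RBF, 2nd- and 3rd-order polynomial)."""
    bases = [linear_kernel(X, Z), rbf_kernel(X, Z),
             poly_kernel(X, Z, 2), poly_kernel(X, Z, 3)]
    w = np.asarray(weights, float)
    w = w / w.sum()                  # keep weights on the simplex
    return sum(wi * K for wi, K in zip(w, bases))
```

Stacking such channels layer by layer, each with its own learned weights, gives the multi-layer, multi-channel combination the step describes.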

Embodiment 2

[0167] The method was used to classify 24 sub-datasets of the UCI repository. Each data set is divided into a training set and a test set at a ratio of 1:1. The combined kernel of each channel in each layer consists of four basic kernel functions: a linear kernel, an RBF kernel, and polynomial kernels (2nd- and 3rd-order, with free parameters α=1 and β=1). The classifier is the standard SVM. In model training, the number of algorithm iterations is set to 100 and the learning rate to lr=1E-5. The penalty coefficient of the SVM is taken from C∈{10^-1, 10, 10^2} and finally determined by 5-fold cross-validation. The algorithm is implemented in MATLAB, with the SVM classifier implemented using the open-source LIBSVM tool. For multi-classification tasks, the algorithm trains classifiers with a "one-vs-all" strategy. The trained model is then used to verify the classification effect on the test set. Nine combined ...
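The 5-fold cross-validation over the penalty grid C∈{10⁻¹, 10, 10²} can be sketched with scikit-learn in place of MATLAB/LIBSVM (`select_C` and the toy blob data are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

def select_C(K, y, grid=(0.1, 10.0, 100.0), folds=5, seed=0):
    """Pick the SVM penalty C from the grid {1e-1, 10, 1e2} by
    5-fold cross-validation on a precomputed kernel matrix K."""
    skf = StratifiedKFold(folds, shuffle=True, random_state=seed)
    scores = {}
    for C in grid:
        accs = []
        for tr, va in skf.split(K, y):
            clf = SVC(C=C, kernel="precomputed")
            clf.fit(K[np.ix_(tr, tr)], y[tr])           # train-kernel block
            accs.append(clf.score(K[np.ix_(va, tr)], y[va]))
        scores[C] = np.mean(accs)
    return max(scores, key=scores.get)

# toy demo: RBF kernel on two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .5, (20, 2)), rng.normal(3, .5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq)
best_C = select_C(K, y)
```

Passing `kernel="precomputed"` lets the combined multi-kernel Gram matrix be plugged directly into the SVM, mirroring the LIBSVM workflow the embodiment describes.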

Embodiment 3

[0181] The DWS-MKL algorithm proposed by the present invention is applied to large-scale MNIST handwritten digit recognition. The MNIST dataset contains handwritten digits from 0 to 9, each sample a 28×28 grayscale image. The MNIST training set contains 50,000 samples and the test set 10,000 samples. For ease of observation, 500 samples are randomly selected and the t-SNE algorithm is used to reduce the data to 2-D and 3-D, as shown in Figure 4. The figure shows that the categories of the MNIST dataset are linearly inseparable. This example demonstrates that the DWS-MKL algorithm can handle high-dimensional, linearly inseparable data.
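The t-SNE visualization step can be reproduced in outline with scikit-learn; synthetic 64-dimensional clusters stand in for the 28×28 MNIST images here, so the embedding is only illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

# Embed high-dimensional samples into 2-D with t-SNE for visual
# inspection, as done for the 500 randomly selected MNIST samples.
# Three synthetic Gaussian clusters substitute for the real digits.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, (20, 64)) for c in (0.0, 5.0, 10.0)])
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
```

Setting `n_components=3` instead would give the 3-D view mentioned above; the 500-sample subsampling keeps t-SNE's quadratic cost manageable.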

[0182] The experimental implementation and hyperparameter settings are consistent with those of Embodiment 2, and 5,000 samples are randomly selected from the MNIST training set and test set as experimental data. Ten groups of classification experiments are run repeatedly, and the a...


PUM

No PUM

Abstract

The invention discloses a data classification method based on depth-width variable multi-kernel learning. The method comprises: step 1, preparing a data set; step 2, providing an algorithm structure for data set classification; step 3, carrying out the first classification of the data with the DWS-MKL algorithm of step 2, employing an SVM as the classifier; step 4, after the data in step 3 is classified for the first time, performing kernel parameter learning; step 6, performing data training using the preceding steps; and step 7, processing the test set data with the classification model obtained by training in step 6, and obtaining the classification accuracy of the algorithm. According to the invention, the nonlinear mapping capability of the kernel method is brought into full play, the structure is flexibly adapted to the data, and the parameters are optimized using the leave-one-out error bound, so that the classification accuracy of the method is improved.

Description

Technical field

[0001] The invention relates to the field of data classification, and in particular to a data classification method based on depth-width variable multi-kernel learning.

Background technique

[0002] As an emerging machine learning technology, deep learning has been widely applied in many fields, such as image processing, natural language processing, and recommendation systems, owing to its excellent performance. However, improving the capability of a deep learning algorithm usually requires large amounts of data. When data are scarce or missing, deep learning's capability is limited and its generalization is poor. In contrast, kernel methods perform well on nonlinear classification of small datasets. Moreover, by solving a high-dimensional linear problem in a low-dimensional space, the kernel method effectively avoids the "curse of dimensionality". [0003] According to the selection method of the kernel function, the kernel method i...

Claims


Application Information

IPC(8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; G06F18/2411; G06F18/251; G06F18/214; Y02D10/00
Inventor: 王婷婷, 何林, 李君宝, 刘劼, 苏华友, 赵菲
Owner HARBIN INST OF TECH