
Improved extreme learning machine combining the learning idea of the least squares support vector machine

An improved extreme learning machine incorporating least squares support vector machine learning, applied in the field of artificial intelligence, which addresses problems such as overfitting.

Active Publication Date: 2012-10-03
路亚科消防车辆制造有限公司

AI Technical Summary

Problems solved by technology

However, the above algorithms are still based on the principle of empirical risk minimization, which can easily lead to overfitting problems.
The literature "Huang G B, Ding X J, Zhou H M. Optimization method based extreme learning machine for classification. Neurocomputing, 2010, 74(1-3): 155-163" and "Liu Q, He Q, Shi Z. Extreme support vector machine classifier. Lecture Notes in Computer Science, 2008, 5012: 222-233" also improved it, but the improved algorithms are only suitable for classification problems.



Examples


Embodiment 1

[0054] Example 1. Simulation data: "SinC"

[0055] "SinC" function expression:

[0056] $y(x) = \begin{cases} \dfrac{\sin x}{x}, & x \neq 0 \\ 1, & x = 0 \end{cases}$

[0057] Data generation method: randomly generate 1000 training samples and 1000 test samples in the interval (-10, 10), and add uniform random noise in the range [-0.2, 0.2] to all training samples, while the test data remain noise-free; the experimental results of the three algorithms on the SinC data set are shown in Table 1.
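For concreteness, the following is a minimal sketch of the data generation described above (Python/NumPy; the random seed and variable names are illustrative choices, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen for illustration; not specified in the patent


def sinc(x):
    """y(x) = sin(x)/x for x != 0 and y(0) = 1."""
    y = np.ones_like(x)
    nonzero = x != 0
    y[nonzero] = np.sin(x[nonzero]) / x[nonzero]
    return y


# 1000 training and 1000 test inputs drawn uniformly from the interval (-10, 10)
x_train = rng.uniform(-10.0, 10.0, 1000)
x_test = rng.uniform(-10.0, 10.0, 1000)

# uniform noise in [-0.2, 0.2] is added to the training targets only;
# the test targets are left noise-free
y_train = sinc(x_train) + rng.uniform(-0.2, 0.2, 1000)
y_test = sinc(x_test)
```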

[0058] Table 1

[0059]

[0060] It can be obtained from Table 1 that because the ELM algorithm is b...

Embodiment 2

[0061] Example 2, Boston Housing data set

[0062] Boston Housing is a data set commonly used to measure the performance of regression algorithms; it can be obtained from the UCI repository. It contains information on 506 houses in the Boston urban area, consisting of 12 continuous features, one discrete feature, and the housing price. The purpose of the regression estimation is to predict the average house price after training on a subset of the samples.

[0063] In the experiment, the sample set is randomly divided into two parts: the 256 groups of data in the training set serve as labeled samples, and the 250 groups of data in the test set serve as unlabeled samples. The experimental results of the three algorithms are shown in Table 2.
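As an illustration of this split, here is a minimal sketch assuming the 506 samples are stored in a local comma-separated file (the file name, loading step, and random seed are assumptions, not taken from the patent):

```python
import numpy as np

# The file name below is a placeholder; the UCI Boston Housing data are assumed to
# have been saved as comma-separated rows of 13 features followed by the house price.
data = np.loadtxt("boston_housing.csv", delimiter=",")
X, y = data[:, :13], data[:, 13]

rng = np.random.default_rng(0)                # seed chosen for illustration
idx = rng.permutation(len(y))                 # random partition of the 506 samples
train_idx, test_idx = idx[:256], idx[256:]    # 256 labeled training / 250 test samples

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```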

[0064] Table 2

[0065]

[0066] It can be seen from Table 2 that for the practical problem of multi-input and single-output in the Boston Housing dataset, the prediction errors of the ELM algorithm and the EOS-ELM algorithm are...

Embodiment 3

[0067] Example 3. Dissolved oxygen data set in actual fish farming

[0068] Dissolved oxygen is a very important water quality indicator in fish farming and plays an important role in the growth of fish. Based on actual conditions, the experiment collected 360 groups of data from the Wuxi breeding base of the National Tilapia Industry Technology Research and Development Center as modeling data. The inputs are the pH value, temperature, nitrate nitrogen value and ammonia nitrogen value, and the output is the dissolved oxygen value. After preprocessing, the data form 360 groups of 5-dimensional samples; the first 260 groups are used as training data and the last 100 groups as test data. The experimental results of the three algorithms are shown in Table 3.
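A minimal sketch of this fixed split follows, assuming the preprocessed data have been saved to a local file (the file name and loading step are placeholders for illustration):

```python
import numpy as np

# The file name is a placeholder; the preprocessed data are assumed to be stored as
# 360 comma-separated rows of [pH, temperature, nitrate nitrogen, ammonia nitrogen,
# dissolved oxygen].
data = np.loadtxt("dissolved_oxygen.csv", delimiter=",")

X, y = data[:, :4], data[:, 4]        # four water-quality inputs, dissolved oxygen output
X_train, y_train = X[:260], y[:260]   # first 260 groups for training
X_test, y_test = X[260:], y[260:]     # last 100 groups for testing
```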

[0069] Table 3

[0070]

[0071] As can be seen from Table 3, the training errors of the three algorithms are very close, and the training errors of the first two algorithms are rel...


Abstract

The invention relates to an improved extreme learning machine, in particular to an improved extreme learning machine that incorporates the learning idea of the least squares support vector machine, and belongs to the technical field of artificial intelligence. On the basis of the empirical risk minimization of the traditional extreme learning machine, the improved extreme learning machine provided by the invention incorporates the learning idea of the least squares support vector machine by adding a structural risk control term, and computes the result by effectively adjusting the ratio between the two kinds of risk, so that the risk of overfitting produced by the model is greatly reduced. Three experiments applying the method to the SinC data set, the Boston Housing data set, and dissolved oxygen prediction in aquaculture show that, compared with the ELM algorithm and the EOS-ELM algorithm, the prediction error of the method is much closer to its training error, so the overfitting problem is effectively alleviated and the prediction accuracy is improved to a certain extent.
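To make the idea concrete, the following is a minimal sketch of a regularized ELM in this spirit: random hidden-layer parameters as in the standard ELM, plus an LS-SVM-style structural-risk (output-weight norm) term whose coefficient C adjusts the ratio between the two kinds of risk. The closed-form solution, the sigmoid activation, and the parameter names are illustrative assumptions and need not match the patent's exact formulation.

```python
import numpy as np


def train_regularized_elm(X, y, n_hidden=20, C=1.0, seed=0):
    """Train a single-hidden-layer ELM with an added structural-risk term.

    Minimizes ||beta||^2 + C * ||H beta - y||^2, i.e. the structural risk
    (norm of the output weights) traded off against the empirical risk,
    with C controlling the ratio between the two.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))               # hidden-layer output matrix

    # closed-form LS-SVM-style solution for the output weights
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta


def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```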

Description

Technical field
[0001] The invention relates to an improved extreme learning machine, in particular to an improved extreme learning machine that integrates the idea of least squares support vector machine regression learning, and belongs to the technical field of artificial intelligence.
Background technique
[0002] Support Vector Machine (SVM) theory is a learning method based on statistical learning theory proposed by Vapnik et al. (in fact, it is also a single-hidden-layer feedforward network). Improving the generalization ability of the learning machine ultimately boils down to solving a quadratic programming problem with linear inequality constraints. However, when the number of training samples increases, the quadratic programming problem faces the curse of dimensionality; for details, refer to "Cortes C, Vapnik V. Support vector networks. Machine Learning, 1995, 20(3): 273-297". Therefore, Suykens et al. proposed the Least Squares Support Vector Machine (LS-SVM) to conve...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/62
Inventor 毛力张立冬
Owner 路亚科消防车辆制造有限公司