
Sparse autoencoder fast training method based on pseudo-inverse learning

A sparse autoencoder training technology, applied in neural learning methods, neural architectures, biological neural network models, etc. It addresses problems such as a time-consuming training process, a lack of theoretical basis for setting control parameters, and the inability to learn useful features; its effects include reduced degrees of freedom and fast calculation speed.

Inactive Publication Date: 2017-12-15
BEIJING NORMAL UNIVERSITY
Cites: 0 · Cited by: 15

AI Technical Summary

Problems solved by technology

Since these algorithms iteratively update the model parameters based on each individual training sample (or a small batch of samples), multiple passes over the data are often required to solve the optimization problem. When the amount of data is large, the training process is extremely time-consuming.
Secondly, the training algorithm involves many control parameters, such as the maximum number of iterations (maximum epoch), the learning step size (step length), the weight decay factor (weight decay), and the momentum factor (momentum). These parameters directly affect the final training result, but how to set them lacks a theoretical basis.
When these parameters are chosen poorly, useful features cannot be learned, and the sparsity of the hidden-layer output h cannot be guaranteed.
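For context, one momentum-SGD update shows where these control parameters enter the training loop. This is an illustrative sketch, not code from the patent; the function name and default values are assumptions:

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.01, weight_decay=1e-4, momentum=0.9):
    """One gradient-descent update of the kind criticized above.
    lr (learning step size), weight_decay, and momentum are exactly the
    hand-tuned control parameters whose settings lack a theoretical basis."""
    v = momentum * v - lr * (grad + weight_decay * w)  # velocity update
    return w + v, v                                    # parameter update

# Each call consumes one sample/mini-batch gradient, so many epochs of
# such steps are needed before the optimization converges.
w, v = np.zeros(3), np.zeros(3)
w, v = sgd_momentum_step(w, grad=np.ones(3), v=v)
```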

Method used




Embodiment Construction

[0039] The present invention provides a method for quickly training sparse autoencoder networks, in order to overcome the shortcomings of existing training algorithms for sparse autoencoder neural networks. To make the purpose, technical solutions, and advantages of the present invention clearer, the method is described in further detail below in conjunction with specific embodiments and the appended Figure 1. It should be understood that the descriptions of specific embodiments here are only used to explain the present invention and are not intended to limit it.

[0040] Specifically, Figure 1 is a flowchart of a method for quickly training a sparse autoencoder network according to an embodiment of the present invention. The training sample set composed of N d-dimensional samples is expressed as a matrix X = [x_1, x_2, ..., x_N], where x_i represents the i-th training sample. The fast training method of the sparse autoencoder based on p...



Abstract

The invention relates to a fast training method for sparse autoencoders based on pseudo-inverse learning, aiming to overcome the defects of prior-art deep neural network training algorithms and to provide a fast training method for sparse self-encoding neural networks. The method adopts a pseudo-inverse learning algorithm and a biased ReLU activation function. The number of hidden neurons is set according to the dimension of the input data vector of the autoencoder; the pseudo-inverse matrix of the input data is truncated, and the truncated pseudo-inverse matrix is taken as the encoder connection weight; the input data is mapped to the hidden space via the biased ReLU activation function; and the decoder connection weight is then obtained as the pseudo-inverse solution computed by the pseudo-inverse learning algorithm. The method requires no gradient-descent-based iterative optimization and no setting of control parameters, is fast in calculation, and can guarantee the sparsity of the learned representation. It is easy to control reconstruction errors, easy to use, and facilitates hardware realization.
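The steps in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration assuming samples stored as columns; the function name, the truncation rule (keeping the first p rows of the pseudo-inverse), and the bias value are assumptions for illustration, not the patent's exact specification:

```python
import numpy as np

def pilae_sketch(X, p, b=0.1):
    """X: (d, N) data matrix with samples as columns.
    p: number of hidden neurons; b: bias of the biased ReLU.
    All values here are illustrative, not the patent's prescription."""
    # 1. Encoder weight: truncate the pseudo-inverse of the input data
    #    to p rows -- a closed-form step with no gradient iteration.
    W_e = np.linalg.pinv(X)[:p, :]          # shape (p, d)
    # 2. Map inputs to the hidden space with a biased ReLU; the bias
    #    zeroes out small activations, encouraging sparsity of h.
    H = np.maximum(0.0, W_e @ X - b)        # shape (p, N)
    # 3. Decoder weight: the pseudo-inverse (least-squares) solution
    #    of W_d @ H ~ X.
    W_d = X @ np.linalg.pinv(H)             # shape (d, p)
    return W_e, W_d, H

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))          # 100 samples of dimension 20
W_e, W_d, H = pilae_sketch(X, p=10)
rel_err = np.linalg.norm(W_d @ H - X) / np.linalg.norm(X)  # reconstruction error
```

Because every step is a closed-form matrix computation, there is no learning rate, momentum, or epoch count to tune, which matches the abstract's claim of being free of iterative optimization and control parameters.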

Description

Technical field [0001] The invention relates to a fast training method for a sparse autoencoder in the field of artificial intelligence, in particular to a fast training method for a sparse autoencoder based on a pseudo-inverse learning algorithm. Background technique [0002] At present, artificial intelligence technology represented by deep learning usually relies on supervised learning, which requires a large amount of labeled data to train the deep network model. However, most of the data obtained in practical applications are unlabeled, and manually labeling a large amount of unlabeled data incurs high manpower and time costs. Therefore, using unsupervised learning techniques to perform representation learning directly on unlabeled data, making full use of the large amount of unlabeled data, is the development trend of artificial intelligence technology. The autoencoder is a commonly used basic model of deep learning. Its basic idea is that th...

Claims


Application Information

IPC(8): G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/044, G06N3/045
Inventor: 郭平 (Guo Ping)
Owner: BEIJING NORMAL UNIVERSITY