
Representation learning method based on superimposed convolution sparse auto-encoder

A sparse-autoencoder and representation-learning technology, applied in the field of representation learning based on superimposed convolutional sparse autoencoders, which addresses problems such as high-dimensional feature representations that are insufficiently abstract and robust, the resulting errors, and degraded model performance.

Pending Publication Date: 2020-10-09
YANGZHOU UNIV

AI Technical Summary

Problems solved by technology

The basic idea of SDA (stacked denoising autoencoders) is to learn a reconstructed feature representation of the original data through encoding and decoding layers. Although its goal is to learn high-level feature representations, SDA, unlike a convolutional network, discards the inherent structure of the input data. To address this, works such as stacked convolutional autoencoders and stacked sparse autoencoders have tried to combine SDA with convolution-pooling structures. Moreover, in real datasets labeled and unlabeled data often coexist, and ignoring the labeled data leads to unsatisfactory performance of unsupervised deep learning models.
[0004] Although the above-mentioned supervised and unsupervised deep learning models have achieved good performance in representation learning and have been applied in various fields, two main problems still hinder their further application. The first is the training of multi-layer convolutional neural networks: despite optimization work in recent years, effective regularization and optimization methods such as sparsity constraints still perform poorly in semi-supervised or unsupervised neural networks. The second is the use of label information and the data redundancy of image data. In real datasets, a small amount of labeled data and a large amount of unlabeled data often coexist, and ignoring the labeled data degrades the overall performance of the model. At the same time, because adjacent pixels in a local region of an image are highly correlated, if this data redundancy is not addressed, the learned high-dimensional feature representation may not be abstract and robust enough, leading to errors.

Method used




Embodiment Construction

[0067] As shown in Figure 1, a representation learning method based on stacked convolutional sparse autoencoders includes:

[0068] Step 1) Design and implement a reconstruction independent component analysis (RICA) algorithm that includes whitening; take the image data set as input, iteratively optimize and learn the output reconstruction matrix, and obtain the trained sparse autoencoder model;
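Step 1 can be sketched as ZCA whitening followed by gradient descent on a reconstruction-ICA (RICA) objective. This is a minimal NumPy illustration: the toy data, shapes, and hyperparameters are assumptions for demonstration, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for image patches: 200 samples, 16 dimensions (illustrative).
X = rng.normal(size=(16, 200))
X -= X.mean(axis=1, keepdims=True)

# ZCA whitening: rotate to the eigenbasis, rescale, rotate back.
cov = X @ X.T / X.shape[1]
vals, vecs = np.linalg.eigh(cov)
zca = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-5)) @ vecs.T
Xw = zca @ X

def rica_cost(W, X, lam):
    # Reconstruction term ||W^T W x - x||^2 plus a smoothed L1 sparsity term on W x.
    Z = W @ X
    return np.sum((W.T @ Z - X) ** 2) + lam * np.sum(np.sqrt(Z ** 2 + 1e-8))

W = rng.normal(scale=0.1, size=(24, 16))   # overcomplete filter bank (assumed size)
lam, lr = 0.1, 1e-4
costs = []
for _ in range(300):
    Z = W @ Xw
    A = W.T @ Z - Xw                       # reconstruction residual
    grad = 2 * (W @ Xw @ A.T + W @ A @ Xw.T)           # gradient of the recon term
    grad += lam * (Z / np.sqrt(Z ** 2 + 1e-8)) @ Xw.T  # gradient of the sparsity term
    W -= lr * grad
    costs.append(rica_cost(W, Xw, lam))
```

On whitened data the reconstruction term pushes `W.T @ W` toward the identity, while the sparsity term keeps the code `W @ x` sparse; the cost should decrease monotonically with this small step size.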

[0069] Step 2) Build a semi-supervised stacked sparse autoencoder model to train the feature representation;
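A minimal sketch of the sparse-autoencoder component of Step 2, using the common KL-divergence penalty on the mean hidden activation; the semi-supervised and stacking aspects are omitted here, and all sizes, data, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 100))            # toy data: 100 samples, 8 dims (illustrative)
h, d, n = 6, X.shape[0], X.shape[1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.1, size=(h, d)); b1 = np.zeros((h, 1))
W2 = rng.normal(scale=0.1, size=(d, h)); b2 = np.zeros((d, 1))
rho, beta, lr = 0.05, 0.1, 0.3           # target sparsity, penalty weight, step size

costs = []
for _ in range(300):
    A = sigmoid(W1 @ X + b1)             # hidden code
    Xhat = W2 @ A + b2                   # linear reconstruction
    rho_hat = A.mean(axis=1)             # average activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    costs.append(np.sum((Xhat - X) ** 2) / (2 * n) + beta * kl)

    # Backprop: reconstruction error plus the KL sparsity term on the hidden layer.
    d2 = (Xhat - X) / n
    sparse = beta / n * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d1 = (W2.T @ d2 + sparse[:, None]) * A * (1 - A)
    W2 -= lr * d2 @ A.T; b2 -= lr * d2.sum(axis=1, keepdims=True)
    W1 -= lr * d1 @ X.T; b1 -= lr * d1.sum(axis=1, keepdims=True)
```

The KL term drives the average activation of each hidden unit toward the small target `rho`, which is what makes the learned code sparse.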

[0070] Step 3) Build a convolutional model to extract block features from the data, apply convolution and pooling operations to generate convolutional feature representations;
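Step 3's convolution and pooling operations can be sketched in plain NumPy: valid cross-correlation via sliding windows, then non-overlapping mean pooling. The image size, filter values, and nonlinearity below are illustrative assumptions, not specifics from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(size=(12, 12))          # toy grayscale image (illustrative)
filters = rng.normal(size=(4, 3, 3))     # 4 "learned" 3x3 filters (assumed shapes)

# Valid cross-correlation via sliding windows: (10, 10, 3, 3) windows
# contracted against each filter gives a (4, 10, 10) feature map stack.
windows = np.lib.stride_tricks.sliding_window_view(img, (3, 3))
conv = np.einsum('ijkl,fkl->fij', windows, filters)
conv = np.maximum(conv, 0)               # simple rectifying nonlinearity (assumption)

# Non-overlapping 2x2 mean pooling: (4, 10, 10) -> (4, 5, 5).
f, h, w = conv.shape
pooled = conv.reshape(f, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
```

Pooling shrinks each feature map and makes the representation locally translation-tolerant, which is the usual motivation for the convolution-pooling structure the patent combines with the autoencoder.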

[0071] Step 4) Stack the convolutional sparse autoencoders to further optimize the convolutional feature representation;

[0072] Step 5) On the basis of the finally learned feature representation, use a logistic regression model to train a classifier on the image data set and obtain the classification result.
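Step 5 can be sketched as logistic regression trained by gradient descent. The feature vectors below are synthetic and well-separated, standing in for the features the stacked model would actually produce; dimensions and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for learned feature vectors: two well-separated classes of 50 samples.
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 5)),
               rng.normal(+2.0, 0.5, size=(50, 5))])
y = np.repeat([0, 1], 50)

w = np.zeros(5)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted class-1 probabilities
    g = p - y                                # gradient of cross-entropy wrt logits
    w -= lr * X.T @ g / len(y)
    b -= lr * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
acc = (pred == y).mean()
```

If the upstream representation learning has done its job, the classes are nearly linearly separable in feature space, so this simple linear classifier reaches high accuracy.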



Abstract

The invention discloses a representation learning method based on a superimposed convolution sparse auto-encoder. The method comprises the following steps: 1) designing and realizing a reconstruction independent component analysis algorithm including whitening, taking an image data set as input, and iteratively optimizing and learning its output reconstruction matrix to obtain a trained sparse auto-encoding model; 2) constructing a semi-supervised superimposed sparse auto-encoder model to train the feature representation; 3) constructing a convolution model to extract block features from the data, and generating the convolution feature representation by applying convolution and pooling operations; 4) superposing the convolution sparse auto-encoders to further optimize the convolution feature representation; and 5) training a classifier on the image data set by using a logistic regression model and obtaining the classification result. The method combines the characteristics of the auto-encoder model with the convolution-pooling structure and utilizes the small fraction of labeled data in a large-scale data set to optimize the feature representation vectors, thereby improving classification accuracy on the image data set.

Description

Technical field
[0001] The invention relates to the field of data mining research, and in particular to a representation learning method based on a stacked convolutional sparse autoencoder.
Background technique
[0002] Representation learning is an effective method that can learn higher-level, more robust feature vectors from raw data such as image datasets; machine learning tasks such as classification and prediction can then be performed on the learned feature vectors. In recent years, deep-learning-based methods have been widely used in representation learning. According to how labels are used, they can be divided mainly into supervised and unsupervised deep learning. A typical supervised deep learning model is the convolutional neural network (CNN), which has been widely applied in machine learning tasks such as computer vision. CNN has two main advantages. First, its deep structure greatly increases its learning ability, which can...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06N20/00
CPC: G06N20/00; G06F18/2136; G06F18/213; G06F18/217; G06F18/241; G06F18/214
Inventors: 朱毅, 李云, 强继朋, 袁运浩
Owner: YANGZHOU UNIV