
Laplace sparse deep belief network image classification method

A deep belief network and image classification technology, applied in the fields of deep learning and image processing

Active Publication Date: 2018-11-13
JIANGNAN UNIV
Cites: 4 · Cited by: 11
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, this method requires the "sparse target" to be set in advance, and all hidden-layer nodes share the same degree of sparseness in a given state.



Examples


Embodiment 1

[0079] As shown in Figure 1, a sparse deep belief network image classification method based on Laplace function constraints comprises the following specific steps:

[0080] Step 1. Select an appropriate training image data set, and perform image preprocessing on it to obtain a training data set.

[0081] Since image classification focuses on the feature extraction process, the color image is first converted into a grayscale image, and the grayscale values are normalized to [0,1], so that only a two-dimensional grayscale matrix is used for feature extraction. The specific normalization formula is as follows:

[0082] x = (x_i − x_min) / (x_max − x_min)

[0083] where x_i is the feature value of the image dataset, x_max and x_min are respectively the maximum and minimum values over all features of the image dataset, and x is the normalized image dataset.
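The preprocessing in Step 1 can be sketched as follows. The function name and the channel-averaging grayscale conversion are illustrative assumptions, not taken from the patent; only the min-max normalization formula comes from the text above:

```python
import numpy as np

def preprocess(images):
    """Convert color images to grayscale and min-max normalize to [0, 1].

    `images` is assumed to have shape (n, H, W, 3) with RGB channels;
    the grayscale conversion (plain channel mean) is an illustrative choice.
    """
    gray = images.mean(axis=-1)              # (n, H, W) grayscale matrices
    x_min, x_max = gray.min(), gray.max()    # extremes over all features
    return (gray - x_min) / (x_max - x_min)  # x = (x_i - x_min) / (x_max - x_min)

# Example: four 2x2 "RGB images" with deterministic values
batch = np.arange(48, dtype=float).reshape(4, 2, 2, 3)
out = preprocess(batch)
print(out.min(), out.max())  # 0.0 1.0
```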

[0084] Step 2. Use the preprocessed training data set for pre-training of the LSDBN network model. According to the i...
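Step 2's pre-training with contrastive divergence under a Laplace sparsity constraint can be sketched as below. The patent's exact penalty gradient is not given in this excerpt; as an assumption, the sketch adds the gradient of a Laplace log-prior on the mean hidden activations to the hidden-bias update, with the scale parameter b controlling sparsity strength. The class name and the value b=0.5 are hypothetical; the learning rate of 1 and CD step size of 1 match the experimental settings stated later:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class LaplaceSparseRBM:
    """CD-1 training of an RBM with a Laplace-style sparsity penalty (sketch).

    The sparsity term here is an assumption: the gradient of a Laplace
    log-prior -|p_j|/b on mean hidden activations p_j, added to the
    hidden-bias update. Smaller b pushes activations harder toward zero.
    """
    def __init__(self, n_vis, n_hid, lr=1.0, b=0.5):
        self.W = rng.normal(0, 0.01, size=(n_vis, n_hid))
        self.bv = np.zeros(n_vis)   # visible biases
        self.bh = np.zeros(n_hid)   # hidden biases
        self.lr, self.b = lr, b

    def cd1_step(self, v0):
        # positive phase: hidden probabilities and a stochastic sample
        ph0 = sigmoid(v0 @ self.W + self.bh)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one Gibbs step (CD with step size 1)
        pv1 = sigmoid(h0 @ self.W.T + self.bv)
        ph1 = sigmoid(pv1 @ self.W + self.bh)
        n = v0.shape[0]
        # Laplace sparsity gradient on the mean hidden activation (assumption)
        p_mean = ph0.mean(axis=0)
        sparse_grad = -np.sign(p_mean) / self.b
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.bv += self.lr * (v0 - pv1).mean(axis=0)
        self.bh += self.lr * ((ph0 - ph1).mean(axis=0) + sparse_grad)

# toy usage: 16 visible units, 10 hidden units, one batch of 100 samples
rbm = LaplaceSparseRBM(16, 10)
data = (rng.random((100, 16)) > 0.5).astype(float)
rbm.cd1_step(data)
print(rbm.W.shape)  # (16, 10)
```

The sparsity gradient drives the hidden biases negative, so hidden units fire only when the weighted input overcomes the bias, yielding sparser codes.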

Embodiment 2

[0156] Example 2: Experiments on the MNIST handwriting database

[0157] The MNIST handwriting data set includes 60,000 training samples and 10,000 test samples, and each picture is 28×28 pixels. To facilitate the extraction of image features, the present invention extracts different numbers of images of each category from the 60,000 training samples for experimental analysis. The model includes 784 visible-layer nodes and 500 hidden-layer nodes, the learning rate is set to 1, the batch size is 100, the maximum number of iterations is 100, and the CD algorithm with a step size of 1 is used to train the model.

[0158] Table 1 shows the sparsity measurement results on the MNIST data set in the present invention, and a comparative analysis with the other two sparse models. The sparsity measurement method is as follows:

[0159]

[0160] For sparse models, the higher the sparsity, the higher the algorithm stability and the stronger the robustness. ...
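The measurement formula itself is not reproduced in this excerpt. As a hedged stand-in, Hoyer's sparseness is a widely used measure with the same "higher is sparser" reading as the discussion above:

```python
import numpy as np

def hoyer_sparseness(h):
    """Hoyer's sparseness of an activation vector, in [0, 1].

    Stand-in measure (the patent's own formula is not shown here):
    (sqrt(n) - l1/l2) / (sqrt(n) - 1), where 1 means a single active
    unit (maximally sparse) and 0 means all units equally active.
    """
    h = np.asarray(h, dtype=float)
    n = h.size
    l1 = np.abs(h).sum()
    l2 = np.sqrt((h ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

print(hoyer_sparseness([1, 0, 0, 0]))  # 1.0 (one active unit: maximally sparse)
print(hoyer_sparseness([1, 1, 1, 1]))  # 0.0 (uniform activation: not sparse)
```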

Embodiment 3

[0169] Example 3: Experiments on the Pendigits Handwriting Recognition Dataset

[0170] The Pen-Based Recognition of Handwritten Digits (PenDigits) data set includes 10,992 data samples divided into 10 categories, with 7,494 training samples, 3,298 test samples, and 16 feature values per sample. As before, a different number of samples per class is extracted for analysis. The visible-layer nodes are set to 16, the hidden-layer nodes to 10, the learning rate to 1, the batch size to 100, and the maximum number of iterations to 1000.

[0171] Figure 2 shows the classification accuracy of LS-RBM in the present invention on the PenDigits handwriting recognition data set for different numbers of samples per class. It can be seen that for most algorithms, the classification accuracy increases as the number of samples per class grows. The LS-RBM algorithm still achieves the best classification accuracy on the PenDigits dat...



Abstract

The invention provides a Laplace sparse deep belief network image classification method, belonging to the fields of image processing and deep learning. The method comprises the following steps. First, inspired by analyses of the primate visual cortex, a penalty regularization term is introduced into the unsupervised-stage likelihood function; through the Laplace sparsity constraint, a sparse representation of the training set is obtained while the CD (Contrastive Divergence) algorithm maximizes the objective function, so that unlabeled data can learn visual feature representations. Second, an improved sparse deep belief network is proposed, in which a Laplace distribution induces the sparse states of the hidden-layer nodes, while the scale parameter of the distribution controls the sparsity strength. Finally, a stochastic gradient descent method is used to train the parameters of the LSDBN (Laplace Sparse Deep Belief Network). Even when the number of samples per category is small, the proposed method consistently achieves the best recognition accuracy and exhibits good sparsity performance.

Description

Technical field

[0001] The invention relates to the fields of image processing and deep learning, and in particular to a Laplace Sparse Deep Belief Network (LSDBN) image classification method based on Laplace function constraints.

Background technique

[0002] Existing image classification mainly adopts methods based on generative or discriminative models. These shallow structural models have certain limitations: with limited samples, their ability to express complex functions is limited and their generalization ability is restricted, which reduces the classification performance of the model. Moreover, image data features contain a great deal of noise and redundant information that must be preprocessed, which consumes considerable time and resources. Therefore, excellent feature extraction algorithms and classification models are an important research direction in image processing.

[0003] In recent years, deep learning has developed rapidly. In 2006, Hin...

Claims


Application Information

Patent Timeline
no application Login to View More
IPC(8): G06K9/62; G06N3/04
CPC: G06N3/045; G06F18/241; G06F18/24155; G06F18/214
Inventors: 宋威, 李蓓蓓, 王晨妮
Owner JIANGNAN UNIV