
A Sparse Deep Belief Network Image Classification Method Based on Laplace Function Constraints

A deep belief network and image technology, applied in the fields of deep learning and image processing, to achieve the effect of strong feature extraction ability

Active Publication Date: 2022-05-13
JIANGNAN UNIV

AI Technical Summary

Problems solved by technology

However, this method requires the "sparse target" to be set in advance, and all hidden-layer nodes share the same sparseness in a given state.



Examples


Embodiment 1

[0079] As shown in Figure 1, a sparse deep belief network image classification method based on Laplace function constraints proceeds through the following specific steps:

[0080] Step 1. Select an appropriate training image data set, and perform image preprocessing on it to obtain a training data set.

[0081] Since image classification focuses on the feature-extraction process, the color image is first converted into a grayscale image, and the grayscale values are normalized to [0,1], so that only a two-dimensional grayscale matrix is used for feature extraction. The specific normalization formula is as follows:

[0082] x = (x_i - x_min) / (x_max - x_min)

[0083] where x_i is a feature value of the image dataset, x_max and x_min are the maximum and minimum values of all features of the image dataset, respectively, and x is the normalized image dataset.
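The min-max normalization of Step 1 can be sketched in a few lines of NumPy (the function name is ours, not the patent's):

```python
import numpy as np

def min_max_normalize(features):
    """Rescale feature values to [0, 1] using the dataset's min and max."""
    x_min = features.min()
    x_max = features.max()
    return (features - x_min) / (x_max - x_min)

# An 8-bit grayscale patch maps onto [0, 1].
img = np.array([[0.0, 128.0], [64.0, 255.0]])
print(min_max_normalize(img))
```

After this step, every pixel lies in [0, 1] regardless of the original intensity range, which matches the assumption the RBM's binary visible units make about their inputs.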

[0084] Step 2. Use the preprocessed training data set to pre-train the LSDBN network model. According to the inpu...
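The CD pre-training referred to in Step 2 can be illustrated with a minimal NumPy sketch of one CD-1 update for a binary RBM. The Laplace sparsity term here is only schematic, a constant pull on the hidden biases whose strength is set by `sparsity_scale` (playing the role of the distribution's scale parameter), and is not the patent's exact gradient; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1, sparsity_scale=0.1):
    """One CD-1 parameter update for a binary RBM, with a schematic
    sparsity penalty on the hidden biases (illustrative only)."""
    # Positive phase: hidden probabilities and a sampled hidden state.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step back to the visible layer and up again.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Contrastive-divergence gradients.
    dW = v0.T @ h0_prob - v1_prob.T @ h1_prob
    db_vis = (v0 - v1_prob).sum(axis=0)
    db_hid = (h0_prob - h1_prob).sum(axis=0)
    # Schematic sparsity term: nudge hidden biases down so mean
    # activations stay low; sparsity_scale controls the strength.
    db_hid -= sparsity_scale * len(v0)
    n = len(v0)
    return (W + lr * dW / n,
            b_vis + lr * db_vis / n,
            b_hid + lr * db_hid / n)
```

Stacking several RBMs trained this way, then fine-tuning the whole stack, is the standard DBN recipe the patent builds on.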

Embodiment 2

[0156] Example 2: Experiments on the MNIST handwriting database

[0157] The MNIST handwriting data set includes 60,000 training samples and 10,000 test samples, each picture being 28x28 pixels. To facilitate the extraction of image features, the present invention extracts different numbers of images per category from the 60,000 training samples for experimental analysis. The model has 784 visible-layer nodes and 500 hidden-layer nodes; the learning rate is set to 1, the batch size to 100, and the maximum number of iterations to 100; the CD algorithm with a step size of 1 is used to train the model.
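For reference, the MNIST hyperparameters stated above can be gathered in one place (the dictionary and its key names are ours):

```python
# Hyperparameters stated for the MNIST experiment (Embodiment 2).
mnist_config = {
    "n_visible": 784,       # 28x28 input pixels
    "n_hidden": 500,
    "learning_rate": 1.0,
    "batch_size": 100,
    "max_iterations": 100,
    "cd_steps": 1,          # CD algorithm with step size 1
}

# Sanity check: the visible layer matches the flattened image size.
assert mnist_config["n_visible"] == 28 * 28
```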

[0158] Table 1 shows the sparsity measurement results of the present invention on the MNIST data set, together with a comparative analysis against the other two sparse models. The sparsity is measured as follows:

[0159]

[0160] For sparse models, higher sparsity implies greater algorithm stability and stronger robustness. ...

Embodiment 3

[0169] Example 3: Experiments on the Pendigits Handwriting Recognition Dataset

[0170] The Pen-Based Recognition of Handwritten Digits (PenDigits) data set includes 10,992 samples divided into 10 categories: 7,494 training samples and 3,298 test samples, each sample having 16 feature vectors. As before, different numbers of images per class are used for analysis. The visible-layer nodes are set to 16, the hidden-layer nodes to 10, the learning rate to 1, the batch size to 100, and the maximum number of iterations to 1000.

[0171] Figure 2 shows the classification accuracy of LS-RBM in the present invention on the Pendigits handwriting recognition data set for different numbers of samples per class. It can be seen that, for most algorithms, classification accuracy rises as the number of samples per class grows. The LS-RBM algorithm still achieves the best classification accuracy on the PenDigits dat...



Abstract

The invention provides a sparse deep belief network image classification method based on Laplace function constraints, belonging to the fields of image processing and deep learning. Inspired by analyses of the primate visual cortex, the method first introduces a penalty regularization term into the likelihood function of the unsupervised stage, uses the CD algorithm to maximize the objective function, and obtains the sparse distribution of the training set through the Laplace sparsity constraint, so that intuitive feature representations can be learned from unlabeled data. Second, an improved sparse deep belief network is proposed that uses the Laplace distribution to induce sparse states in the hidden-layer nodes, with the distribution's scale parameter controlling the degree of sparseness. Finally, stochastic gradient descent is used to train and learn the parameters of the LSDBN network. The proposed method achieves the best recognition accuracy even when each class has few samples, and exhibits good sparsity performance.

Description

Technical field

[0001] The invention relates to the fields of image processing and deep learning, and in particular to a Laplace Sparse Deep Belief Network (LSDBN) image classification method based on Laplace function constraints.

Background technique

[0002] Existing image classification mainly adopts methods based on generative or discriminative models. These shallow structural models have certain limitations: with limited samples, their ability to express complex functions is limited, their generalization ability is restricted, and the model's classification performance drops. Moreover, image data features contain substantial noise and redundant information that must be preprocessed, consuming considerable time and resources. Therefore, excellent feature-extraction algorithms and classification models are an important research direction in image processing.

[0003] In recent years, deep learning has developed rapidly. In 2006, Hin...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC (8): G06K9/62, G06V10/774, G06V10/764, G06V10/82, G06N3/04
CPC: G06N3/045, G06F18/241, G06F18/24155, G06F18/214
Inventor: 宋威, 李蓓蓓, 王晨妮
Owner JIANGNAN UNIV