End-to-end multi-task feature fusion method for Chinese painting classification
A feature-fusion and multi-task technique in the field of instruments, character and pattern recognition, and computer components. It addresses the problems of information loss and limited generalization ability, and achieves the effect of reducing overfitting and improving classification accuracy.
Examples
Embodiment 1
[0034] An embodiment of the present invention provides an end-to-end multi-task feature fusion method for Chinese painting classification; see Figure 1. The method includes the following steps:
[0035] 1. Multi-task feature fusion (MTFFNet) architecture
[0036] A multi-task feature fusion network (MTFFNet) architecture for traditional Chinese painting classification; the proposed MTFFNet model is shown in Figure 1.
[0037] As shown, the network consists of two task branches, RGB image feature learning and stroke feature learning, both of which use DenseNet as the backbone component. The top branch is RGB image feature learning: it takes the original image of a Chinese painting as input and learns high-level semantic information describing the painting's features from an RGB perspective. The bottom branch is stroke information learning: it takes the gray-level co-occurrence matrix (GLCM) image as inp...
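The GLCM input of the stroke branch can be illustrated with a minimal NumPy sketch. This is not the patent's implementation; the function name, offset parameters, and toy image below are assumptions for illustration only. It counts how often pairs of quantized gray levels co-occur at a fixed pixel offset, which is the texture statistic the stroke branch consumes.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    image: 2-D array of ints already quantized to `levels` gray levels.
    Returns a (levels, levels) matrix of normalized co-occurrence counts.
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    # Count each (i, j) gray-level pair separated by the offset.
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            i = image[y, x]
            j = image[y + dy, x + dx]
            m[i, j] += 1
    return m / m.sum()  # normalize counts to joint probabilities

# Toy 4x4 "painting" quantized to 4 gray levels (illustrative data).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
g = glcm(img, levels=4)  # horizontal-neighbor co-occurrence probabilities
```

In practice the resulting matrix (or an image rendering of it) is what the stroke branch would take as input in place of the raw RGB image.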
Embodiment 2
[0061] The feasibility of the scheme in Embodiment 1 is verified below with concrete experiments; see the following description for details:
[0062] 1. Experimental settings
[0063] The model of the present invention is implemented with the deep learning frameworks TensorFlow and Keras. MTFFNet is trained using stochastic gradient descent (SGD) with a batch size of 64 images. Following the settings of AlexNet (ImageNet Classification with Deep Convolutional Neural Networks), the learning rate at training iteration i is set to:
[0064]
[0065] Here p is the total number of iterations, set to 100 to ensure convergence of the model: with this schedule in place, the learning rate decreases over time, so the model can converge by the end of training. The SVM classifier is implemented with the LIBSVM (A Library for Support Vector Machines) toolbox, using a Gaussian kernel function and gradient optimizatio...
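The Gaussian (RBF) kernel used by the SVM classifier can be sketched in NumPy. This is a generic illustration, not LIBSVM's internal code; the gamma value and sample points are assumptions (LIBSVM exposes the kernel coefficient as its `-g` option).

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    """RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    # Squared Euclidean distances via the expansion ||x||^2 + ||y||^2 - 2 x.y
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp tiny negatives

# Two illustrative 2-D feature vectors.
X = np.array([[0.0, 0.0],
              [1.0, 1.0]])
K = gaussian_kernel(X, X)
```

The SVM then operates on this kernel matrix instead of raw feature vectors, which is what lets a linear decision boundary in kernel space separate nonlinearly distributed painting features.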