
Multi-feature fusion image classification method based on deep learning

A multi-feature fusion and deep learning technology, applied in neural learning methods, image analysis, image enhancement, etc. It addresses the problems of inaccurate feature extraction and limited classification accuracy, and achieves the effect of improving feature accuracy and classification accuracy.

Pending Publication Date: 2021-03-12
HANGZHOU DIANZI UNIV

AI Technical Summary

Problems solved by technology

Because an image may contain many important features, traditional feature extraction may fail to extract all of them accurately, which in turn limits classification accuracy.




Embodiment Construction

[0026] The present invention is further explained below in conjunction with the accompanying drawings.

[0027] The hardware environment of this embodiment is 8 vCPUs with 64 GB of memory and a V100 GPU; the software environment is CUDA 9.2.148, Python 3.7, and PyTorch 1.0.1.post2.

[0028] As shown in Figure 1, the steps of the multi-feature fusion image classification method based on deep learning are as follows:

[0029] Step 1. Divide the collected digital pathology images of eye tumors into a training set, a validation set, and a test set, where each set contains samples of all three classes: early stage, middle stage, and late stage.
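
For illustration, a minimal sketch of such a split using torchvision, under assumptions the patent does not state: the images are stored in one directory per class (the path data/eye_tumor and the 70/15/15 ratios are hypothetical), and the split is random rather than stratified.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Assumed layout: data/eye_tumor/{early,middle,late}/*.png  (hypothetical path)
full_set = datasets.ImageFolder("data/eye_tumor", transform=transforms.ToTensor())

# 70/15/15 ratios are an assumption; the patent does not specify them.
n_total = len(full_set)
n_train = int(0.7 * n_total)
n_val = int(0.15 * n_total)
n_test = n_total - n_train - n_val

train_set, val_set, test_set = random_split(
    full_set, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0),  # fixed seed for reproducibility
)
```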

[0030] Step 2. Normalize the images in the training, validation, and test sets and crop them to 224×224. For the training set, randomly flip the images horizontally, flip them vertically, and modify their brightness, each with probability P1 = 0.5.
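
A minimal sketch of this augmentation pipeline with torchvision transforms; P1 = 0.5 comes from the embodiment, while the brightness-jitter range and the normalization statistics (ImageNet means and standard deviations) are placeholder assumptions.

```python
from torchvision import transforms

P1 = 0.5  # probability stated in the embodiment

# Training set: random flips and brightness change, then crop and normalize.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(p=P1),
    transforms.RandomVerticalFlip(p=P1),
    transforms.RandomApply([transforms.ColorJitter(brightness=0.2)], p=P1),
    transforms.Resize(256),                              # resize before cropping (assumed)
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])

# Validation/test sets: deterministic crop and the same normalization only.
eval_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```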

[0031] Step 3, establish as figu...



Abstract

The invention discloses a multi-feature fusion image classification method based on deep learning. The method comprises the steps of data set division, data enhancement, classification network model construction, model initialization, and model training and optimization. The data enhancement step enriches data features by randomly applying operations such as horizontal flipping, vertical flipping, and brightness modification to each image according to a probability. During construction of the classification network model, the features extracted in a first pass are randomly masked and then extracted again, and the features from the two passes are fused, so that the features are diversified and the classification accuracy is improved. The method can be used to classify eye malignant tumor images, locate lesion areas in the images as feature regions, output probability values for the lesion types, and assist reading physicians in their judgment.
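
A minimal sketch of the "extract, randomly mask, re-extract, fuse" idea summarized above, assuming a ResNet-18 backbone split into two stages, random channel masking of the first-pass feature map, and fusion by concatenation before a linear classifier; the patent does not confirm these specific choices, and all names (MaskedFusionClassifier, mask_prob, etc.) are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

class MaskedFusionClassifier(nn.Module):
    """Extract features, randomly cover part of them, extract again, then fuse."""

    def __init__(self, num_classes=3, mask_prob=0.3):
        super().__init__()
        backbone = models.resnet18(pretrained=True)                     # assumed backbone
        self.stage1 = nn.Sequential(*list(backbone.children())[:-3])    # first-pass extractor
        self.stage2 = nn.Sequential(*list(backbone.children())[-3:-2])  # second-pass extractor
        self.mask_prob = mask_prob          # fraction of channels to cover (assumed value)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # fused descriptor = first-pass features + re-extracted masked features
        self.fc = nn.Linear(256 + 512, num_classes)

    def forward(self, x):
        f1 = self.stage1(x)                 # first extraction (256 channels for resnet18)
        if self.training:
            # randomly cover some channels of the first-pass features
            keep = (torch.rand(f1.size(0), f1.size(1), 1, 1, device=f1.device)
                    > self.mask_prob).float()
            f1_masked = f1 * keep
        else:
            f1_masked = f1
        f2 = self.stage2(f1_masked)         # second extraction on the masked features
        fused = torch.cat([self.pool(f1).flatten(1),   # fuse the two passes
                           self.pool(f2).flatten(1)], dim=1)
        return self.fc(fused)
```

For 224×224 inputs the first-pass descriptor has 256 channels and the re-extracted one has 512, giving a 768-dimensional fused vector; element-wise addition, as described in the background section below, would be an equally plausible fusion choice.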

Description

Technical field

[0001] The invention belongs to the field of artificial intelligence, and in particular relates to an image classification method based on deep learning.

Background technique

[0002] With the development of deep learning, the technology of classifying images with neural networks has matured. Traditional approaches use convolutional neural networks to extract features from images. Since an image may contain many important features, traditional feature extraction may not capture all of them accurately, which affects classification accuracy. The present method uses an attention mechanism to extract noteworthy features, randomly covers one of these features, and adds the processed features back to the original image features. This feature addition suppresses some features while emphasizing other, more important ones, so that the trained neural network can capture as many features as possible...
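
The masking-and-addition step described in this background passage can be sketched as a small PyTorch module. This is an interpretation under explicit assumptions: the attention map is a 1×1 convolution followed by a sigmoid, "covering one feature" is implemented as zeroing one random channel of the attended features during training, and the result is added element-wise to the original feature map; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class AttentionMaskAdd(nn.Module):
    """Weight features with a spatial attention map, randomly cover one attended
    channel during training, and add the result back to the original features so
    that the remaining features are emphasized."""

    def __init__(self, channels):
        super().__init__()
        # simple spatial attention: 1x1 convolution + sigmoid (an assumed form)
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, features):
        weights = self.attn(features)          # B x 1 x H x W attention map
        attended = features * weights          # noteworthy features
        if self.training:
            # randomly cover one channel of the attended features per sample
            batch = attended.size(0)
            drop = torch.randint(0, attended.size(1), (batch,), device=attended.device)
            mask = torch.ones_like(attended)
            mask[torch.arange(batch, device=attended.device), drop] = 0.0
            attended = attended * mask
        # add the processed features to the original features
        return features + attended
```

Because the module preserves the shape of its input, it could in principle be inserted after any convolutional stage of a backbone such as the one sketched under the abstract above.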


Application Information

IPC(8): G06K9/62; G06N3/04; G06N3/08; G06T7/00; G06T7/11
CPC: G06N3/084; G06T7/11; G06T7/0012; G06T2207/20081; G06T2207/20084; G06N3/045; G06F18/24; G06F18/253; G06F18/214; Y02T10/40
Inventors: 岳雪颖, 田泽坤, 孙玲玲
Owner: HANGZHOU DIANZI UNIV