
A classification method based on deep bag of features

A technology combining deep features and classification methods, applied in image analysis, image enhancement, instruments, etc. It addresses the large amount of computation and long runtime of three-dimensional convolutional neural networks, with the effects of shortening computation time, alleviating overfitting, and reducing the feature dimension.

Active Publication Date: 2021-06-15
SOUTHERN MEDICAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0006] The present invention provides a classification method based on deep feature bags. The method reduces the large amount of computation and long runtime of three-dimensional convolutional neural networks, avoids the overfitting caused by the scarcity of medical image training data, and improves the classification performance of the support vector machine.



Examples


Embodiment 1

[0040] As shown in Figures 1-3, a classification method based on deep feature bags comprises the following steps in sequence:

[0041] S1. Collect a three-dimensional image of the target object, and delineate and segment the region of interest;
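The delineation tool for S1 is not specified here; assuming it yields a binary mask aligned with the volume, a minimal NumPy sketch of masking and cropping to the region of interest could look like this (the function name and the (Z, Y, X) axis order are illustrative assumptions):

```python
import numpy as np

def crop_to_roi(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mask a 3D volume with a binary ROI mask and crop it to the
    mask's bounding box. `volume` and `mask` share a (Z, Y, X) shape."""
    coords = np.argwhere(mask > 0)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1          # exclusive upper bounds
    roi = np.where(mask > 0, volume, 0)  # zero out voxels outside the ROI
    return roi[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```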

[0042] S2. Decompose the three-dimensional image of the region of interest obtained in step S1 into two-dimensional images on three orthogonal two-dimensional planes to obtain three groups of two-dimensional images, and from each group select the two-dimensional image containing the most pixels as the input image of the corresponding orthogonal plane;
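A minimal sketch of this slice selection, assuming "the most pixels" means the largest region-of-interest area per slice, with an illustrative axis-to-plane mapping:

```python
import numpy as np

def representative_slices(roi: np.ndarray, mask: np.ndarray) -> list:
    """For each of the three orthogonal planes, return the 2D slice
    whose ROI (nonzero-mask) pixel count is largest."""
    slices = []
    for axis in range(3):
        other = tuple(a for a in range(3) if a != axis)
        counts = (mask > 0).sum(axis=other)  # ROI pixels per slice
        best = int(np.argmax(counts))        # index of the largest slice
        slices.append(np.take(roi, best, axis=axis))
    return slices  # [transverse, coronal, sagittal] under the assumed axis order
```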

[0043] S3. Use a convolutional neural network to perform feature extraction on the input images of the three two-dimensional orthogonal planes obtained in step S2 to obtain the depth features of the three two-dimensional orthogonal planes. The depth features of the three two-dimensional orthogonal planes include training sample depth features and t...
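The embodiment does not name the network at this point; purely as an illustration, the per-plane depth features could be taken from the penultimate layer of an ImageNet-pretrained ResNet-18 (torchvision ≥ 0.13 weights API assumed; function name and preprocessing are illustrative):

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

# Illustrative backbone only: the patent does not name an architecture here.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 512-d penultimate features
backbone.eval()

def deep_features(slice_2d: np.ndarray) -> np.ndarray:
    """Extract a deep feature vector from a single 2D slice."""
    x = torch.from_numpy(slice_2d.astype(np.float32))
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)  # scale to [0, 1]
    x = x.unsqueeze(0).repeat(3, 1, 1)              # grey -> 3 channels
    x = TF.resize(x, [224, 224])                    # network input size
    x = TF.normalize(x, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    with torch.no_grad():
        return backbone(x.unsqueeze(0)).squeeze(0).numpy()
```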

Embodiment 2

[0059] A classification method based on deep feature bags, with all other features the same as in Embodiment 1, except that in step S5 the kernel function is a Gaussian kernel whose expression is:

$$K(X_i, X_j) = \sum_{V=1}^{3} \omega_V \, D_{RBF}^{V}(X_i, X_j), \qquad D_{RBF}^{V}(X_i, X_j) = \exp\!\left(-\frac{\sum_{k}\bigl(X_{i,k}^{V} - X_{j,k}^{V}\bigr)^{2}}{2\sigma^{2}}\right)$$

where $D_{RBF}(X_i, X_j)$ is the Gaussian distance; $V = 1, 2, 3$ indexes the transverse, coronal and sagittal planes respectively, so that $\omega_1, \omega_2, \omega_3$ are the weighting coefficients of the transverse, coronal and sagittal planes, and $D_{RBF}^{1}, D_{RBF}^{2}, D_{RBF}^{3}$ denote the Gaussian distances of the transverse, coronal and sagittal planes; $X_i, X_j$ are the $i$-th and $j$-th samples of the training sample features; $\sigma$ is a hyperparameter determined during training; $k$ indexes the $k$-th feature of the feature vector; $N$ is the total number of training samples.
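A NumPy sketch of the fused Gram matrix under the kernel form reconstructed above (function name, weights and σ are placeholders to be set during training):

```python
import numpy as np

def fused_gaussian_gram(features: list, weights: list, sigma: float) -> np.ndarray:
    """Fused Gram matrix K = sum_V w_V * exp(-||x_i - x_j||^2 / (2*sigma^2)),
    with one (N, d_V) feature matrix per orthogonal plane in `features`."""
    n = features[0].shape[0]
    K = np.zeros((n, n))
    for X, w in zip(features, weights):
        # Squared Euclidean distances between all sample pairs in this plane.
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        K += w * np.exp(-sq / (2.0 * sigma ** 2))
    return K
```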

[0060] In step S6, the classification function expression is:

[0061] $$f(x) = \operatorname{sgn}\!\left(\sum_{i=1}^{N} \alpha_i\, y_i\, K(X_i, x) + b\right)$$

[0062] where $\alpha_i$ indicates the weight coefficient o...
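Because the fused kernel is precomputed, the support vector machine can be trained through scikit-learn's precomputed-kernel interface. The sketch below substitutes toy random features for the encoded plane features; all names, the toy data, and the weight values are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def fused_cross_kernel(feats_a, feats_b, weights, sigma):
    """Fused Gaussian kernel between two sample sets: rows index
    samples of `feats_a`, columns samples of `feats_b`."""
    K = np.zeros((feats_a[0].shape[0], feats_b[0].shape[0]))
    for A, B, w in zip(feats_a, feats_b, weights):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        K += w * np.exp(-sq / (2.0 * sigma ** 2))
    return K

# Toy stand-ins for the encoded per-plane features and labels.
rng = np.random.default_rng(0)
train_feats = [rng.normal(size=(40, 64)) for _ in range(3)]
test_feats = [rng.normal(size=(10, 64)) for _ in range(3)]
y_train = rng.integers(0, 2, size=40)

weights, sigma = [0.4, 0.3, 0.3], 1.0  # would be tuned on training data
clf = SVC(kernel="precomputed")
clf.fit(fused_cross_kernel(train_feats, train_feats, weights, sigma), y_train)
y_pred = clf.predict(fused_cross_kernel(test_feats, train_feats, weights, sigma))
```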



Abstract

The present invention proposes a classification method based on deep feature bags. It extracts the depth features of a representative image for each of three two-dimensional orthogonal planes, re-encodes the extracted depth features with a bag-of-features model, and finally combines the features of the three orthogonal planes by kernel fusion to obtain a classification function that classifies the image. This method greatly reduces the amount of computation during training and testing, saves data space, and shortens the overall computation time. Encoding the image features with a "codebook" yields a high-dimensional sparse representation, making the features more discriminative, more stable, and more compact, and alleviating the overfitting caused by insufficient data. When computing the kernel function, the three orthogonal planes are multiplied by different weight coefficients, which better exploits the distinct spatial information contained in the three sections and makes the constructed classification function more discriminative.
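The abstract's "codebook" encoding is a standard bag-of-features construction. Assuming k-means codewords (the CPC listing includes G06F18/23213, the k-means class, though the text here does not name the algorithm), a short scikit-learn sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(local_feats: np.ndarray, n_words: int) -> KMeans:
    """Cluster local deep feature vectors (rows) into `n_words` codewords."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(local_feats)

def encode(local_feats: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Re-encode one image as a normalized codeword histogram
    (a sparse, high-dimensional bag-of-features vector)."""
    words = codebook.predict(local_feats)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# Toy usage: 500 local features of dimension 64, a 100-word codebook.
rng = np.random.default_rng(0)
codebook = build_codebook(rng.normal(size=(500, 64)), n_words=100)
vec = encode(rng.normal(size=(30, 64)), codebook)  # one image's encoding
```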

Description

Technical field

[0001] The invention relates to the technical field of medical image classification and prediction, in particular to a classification method based on deep feature bags.

Background technique

[0002] In recent years, with the rapid development of image processing technology and machine learning methods, medical image processing has attracted more and more attention. Many studies have shown that, based on patient medical images including MRI (magnetic resonance imaging), CT (computed tomography), and PET (positron emission tomography) images, methods such as pattern recognition and machine learning can achieve benign-malignant tumor classification, preoperative prediction, and prognosis analysis, providing powerful help for clinical decision-making.

[0003] Deep learning methods have powerful learning capabilities and have achieved great success in image processing, object detection, and other fields, while convolutional neural netw...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/00; G06K9/62
CPC: G06T7/0012; G06T2207/20084; G06T2207/20081; G06T2207/10104; G06T2207/10116; G06T2207/10081; G06T2207/10088; G06T2207/30068; G06T2207/30064; G06T2207/30081; G06F18/23213; G06F18/2413
Inventor: 张煜, 罗嘉秀, 宁振源
Owner: SOUTHERN MEDICAL UNIVERSITY