Breast cancer image recognition method and system based on multi-stage and multi-feature deep fusion

An image recognition and multi-feature fusion technology, applied in the field of information processing, which can solve problems such as high complexity, high time cost, and easy loss of discriminative information, and achieves the effects of reducing machine hardware configuration requirements, reducing time and equipment costs, and reducing the number of model parameters.

Active Publication Date: 2022-07-05
EAST CHINA JIAOTONG UNIVERSITY

AI Technical Summary

Problems solved by technology

Its main problem: the pre-trained weights are based on the ImageNet dataset, which does not contain any medical images, so the pre-trained weights are of limited help for mammography images.
The main disadvantage of this method: the neural network contains a large number of parameters, and tuning them takes a very long time.
[0011] In summary, the deep learning method can improve the diagnostic performance of breast cancer, but it also has key problems such as high requirements on machine hardware, long training time, high complexity and strict requirements on image size when training the model
[0012] In summary, the problems existing in the prior art are: (1) the feature extraction methods in existing mammography image recognition models are complex, demand strong domain expertise, cannot be applied universally, and have low extraction efficiency; during recognition, traditional features have weak discriminative ability, so the accuracy of traditional mammography image recognition methods is low and their practical value is limited.
Training a model with deep learning methods places high demands on machine hardware, takes a long time, is highly complex, and imposes strict requirements on image size.
[0013] (2) The feature fusion methods adopted by existing approaches are relatively simple and do not consider the complementarity of different features in breast mass recognition.
In addition, the "cross-modal pathological semantics" between features has not been effectively utilized, high-quality cross-modal discriminative information remains to be mined, and high-dimensional features lead to high time complexity of the model.
[0014] (3) During neural network training, a large number of parameters need to be tuned, and this tuning is difficult, so model training takes a long time.
The neural network model also places extremely high demands on machine hardware, and purchasing a high-performance server containing multiple GPUs is very expensive.
[0015] (4) Pre-trained neural network models impose strict requirements on the size of the input image. For original mammography images with high resolution, key discriminative information is easily lost if the image size is greatly reduced, which degrades the final recognition performance.
[0016] (5) Some patch-level models require accurate annotation of the lesion area first, and the cost of high-quality medical image annotation is extremely high.
[0017] Difficulties in solving the above technical problems: there are many types of image features; when selecting features, one must account for the different visual perspectives from which each feature describes the image content, while also ensuring a degree of complementarity between features; different image features are heterogeneous to one another, and the maximal canonical correlation between them must be found accurately; pre-training a neural network model on medical images takes a great deal of time, and a single deep learning model does not perform well (see the results in Table 9); training deep learning models relies on high-performance GPU servers, and purchasing such servers entails large equipment costs; if the recognition model is trained on lesion regions, accurate labels must first be obtained, and annotating lesion regions requires substantial labor; and the size of mammography images cannot be greatly reduced, as this would seriously degrade the visual information in the images and interfere with accurate recognition by the model.
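The requirement of finding the maximal canonical correlation between heterogeneous features is the kind of step usually handled with canonical correlation analysis (CCA). Below is a minimal sketch, assuming scikit-learn's CCA and random placeholder matrices standing in for traditional (e.g. HOG) and deep (e.g. VGG16) feature sets; it illustrates the general technique rather than the patent's exact procedure.

```python
# Minimal CCA sketch: project two heterogeneous feature sets describing the same
# mammography images into a shared space where their correlation is maximal.
# The feature matrices below are random placeholders, not real data.
import numpy as np
from sklearn.cross_decomposition import CCA

n_images = 200
hog_feats = np.random.rand(n_images, 324)    # hypothetical traditional features
deep_feats = np.random.rand(n_images, 512)   # hypothetical deep features

cca = CCA(n_components=32)                    # shared, compressed dimensionality
hog_proj, deep_proj = cca.fit_transform(hog_feats, deep_feats)

# Per-component correlation between the two projected views
corrs = [np.corrcoef(hog_proj[:, k], deep_proj[:, k])[0, 1] for k in range(32)]
print("first canonical correlations:", np.round(corrs[:5], 3))
```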



Examples


Embodiment 1

[0083] Multi-stage multi-feature deep fusion refers to: 1) using multiple complementary features, namely traditional features and deep learning features (four traditional features and three deep learning features); because their extraction methods differ and the features are heterogeneous, they describe mammography images from different visual perspectives; 2) applying several simple and effective feature fusion strategies at different stages: early fusion, intermediate fusion, and late fusion. Each stage contributes to improving the performance of the final breast cancer diagnosis.
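As a rough illustration of the three stages named above, the sketch below combines two placeholder feature matrices by early fusion (concatenating raw features), intermediate fusion (concatenating compact per-modality representations), and late fusion (averaging per-modality classifier scores). The PCA projections and logistic-regression classifiers are assumptions for illustration only, not the DE-Ada* components.

```python
# Illustrative sketch of early / intermediate / late fusion on placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_trad = rng.random((300, 324))   # hypothetical traditional features (e.g. HOG)
X_deep = rng.random((300, 512))   # hypothetical deep features (e.g. ResNet)
y = rng.integers(0, 2, 300)       # placeholder benign / malignant labels

# 1) Early fusion: concatenate raw feature vectors before any learning.
X_early = np.hstack([X_trad, X_deep])

# 2) Intermediate fusion: fuse compact learned representations of each modality.
X_mid = np.hstack([PCA(n_components=32).fit_transform(X_trad),
                   PCA(n_components=32).fit_transform(X_deep)])

# 3) Late fusion: train one classifier per modality, then fuse their scores.
p_trad = LogisticRegression(max_iter=1000).fit(X_trad, y).predict_proba(X_trad)[:, 1]
p_deep = LogisticRegression(max_iter=1000).fit(X_deep, y).predict_proba(X_deep)[:, 1]
p_late = (p_trad + p_deep) / 2    # simple average of per-modality scores

print(X_early.shape, X_mid.shape, p_late.shape)
```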

[0084] Based on the idea of multi-stage and multi-feature deep fusion, a new DE-Ada* breast cancer diagnosis model is proposed, which includes four parts: image feature extraction, cross-modal pathological semantic mining, multi-feature fusion, and breast cancer classification. Gist (G), SIFT (S), HOG (H), LBP (L), VGG16 (V), ResNet (R), and DenseNet (D) features of the images are extracted from the perspect...
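A minimal sketch of the feature-extraction step appears below, covering one pair of traditional descriptors (HOG and LBP via scikit-image) and one deep backbone (VGG16 via Keras); the preprocessing choices, parameter values, and placeholder images are assumptions for illustration and are not taken from the patent.

```python
# Sketch: extracting traditional and deep features from mammography images.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

def traditional_features(gray_img):
    """HOG vector plus LBP histogram for one grayscale image (H x W, values in [0, 1])."""
    hog_vec = hog(gray_img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

def deep_features(rgb_batch):
    """Global-average-pooled VGG16 activations for a batch of 224x224 RGB images."""
    backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(rgb_batch), verbose=0)

# Hypothetical usage with random placeholder images
gray = np.random.rand(224, 224)
rgb = np.random.rand(4, 224, 224, 3) * 255.0
print(traditional_features(gray).shape, deep_features(rgb).shape)
```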



Abstract

The invention belongs to the technical field of image processing and discloses a breast cancer image recognition method based on multi-stage and multi-feature deep fusion. The method extracts Gist, SIFT, HOG, LBP, VGG16, ResNet, and DenseNet features; deeply mines the cross-modal pathological semantics contained in the different features; performs feature fusion through early fusion, intermediate fusion, and late fusion; constructs a multi-stage multi-feature fusion model integrating the three fusion stages; and classifies, recognizes, and processes breast masses and outputs the processing results. The invention extracts traditional features and deep learning features of mammography images, deeply mines the cross-modal pathological semantics between different features, and designs a multi-stage multi-feature fusion strategy to complete breast cancer image recognition. At the same time, the dimensionality of the core features is compressed to improve the real-time efficiency of the diagnostic model.
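The compression of core feature dimensionality mentioned above can be sketched with a generic projection such as PCA; the snippet below is illustrative only, uses random placeholder features, and does not reproduce the patent's specific compression procedure.

```python
# Sketch: compressing high-dimensional fused features to cut prediction latency.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((500, 2048))        # hypothetical high-dimensional fused features
y = rng.integers(0, 2, 500)        # placeholder benign / malignant labels

X_small = PCA(n_components=64).fit_transform(X)   # compressed core features

for name, feats in [("full", X), ("compressed", X_small)]:
    clf = SVC(probability=True).fit(feats, y)
    t0 = time.perf_counter()
    clf.predict(feats)
    print(f"{name}: dim={feats.shape[1]}, predict time={time.perf_counter() - t0:.3f}s")
```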

Description

Technical field [0001] The invention belongs to the technical field of information processing, and in particular relates to a breast cancer image recognition method and system based on multi-stage multi-feature deep fusion. Background technique [0002] At present, the commonly used technologies in the industry are as follows: breast cancer is the cancer with the highest incidence in women, and it is also a relatively complex, clinically heterogeneous disease with a very high mortality rate. Improving the survival rate and quality of life of patients is therefore of great significance. Computer-aided automatic diagnosis of breast cancer has become a common concern of both academia and industry. Mammography images (X-ray images) can well reflect various abnormalities in breast tissue, so pathologists can make correct diagnostic decisions based on them. However, the size and shape of the masses in mammography images are diverse, and the density of breast tissue in differe...


Application Information

Patent Type & Authority: Patent (China)
IPC IPC(8): G06V10/764; G06V10/80; G06V10/82; G06V10/25; G06V10/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V10/25; G06V10/464; G06V2201/032; G06N3/045; G06F18/241; G06F18/253
Inventor 李广丽邬任重袁天李传秀张红斌
Owner EAST CHINA JIAOTONG UNIVERSITY