Medical image automatic segmentation method based on multi-path attention fusion

A technology for the automatic segmentation of medical images, applied in image analysis, neural learning methods, image enhancement, etc. It addresses problems such as the difficulty of preserving spatial information, the loss of spatial information in the encoder, and the resulting degradation of segmentation results, so as to improve feature quality and achieve good segmentation accuracy.

Active Publication Date: 2020-09-18
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

However, in the hierarchical transformations of the U-Net network, the learning processes at different pooling levels often share the same data path, so the generated multi-scale feature maps may not be as well differentiated as expected. Because of the pooling layers, the encoder loses part of the spatial information, and the single U-Net network uses only two simple consecutive...



Examples


Example Embodiment

[0058] Example 1

[0059] The pictures in the medical image data set are divided into a training set and a verification set. The training set is used to train the model, and the verification set is used to tune the model's indicators. Because sufficient training samples are difficult to obtain for medical image segmentation, the present invention augments the pictures in the training set. The augmentation operations include the following (a code sketch is given after this list):

[0060] Rotate the pictures in the training set by 10°, 20°, -10° and -20°, and save the rotated pictures;

[0061] Flip the pictures in the training set vertically and horizontally, and save the flipped pictures;

[0062] Perform elastic transformation on the pictures in the training set, and save the pictures after the elastic transformation;

[0063] Scale the pictures in the training set within the (20%, 80%) range, and save the scaled pictures;
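The augmentation steps above can be summarized in a minimal sketch. The code below is illustrative only, assuming grayscale images stored as 2D NumPy arrays; the elastic-transform parameters (alpha, sigma) and the reading of the (20%, 80%) range as scale factors 0.2–0.8 are assumptions, not values taken from the patent.

```python
# Illustrative augmentation sketch (not the patent's code); parameters are assumed.
import numpy as np
from scipy.ndimage import rotate, zoom, gaussian_filter, map_coordinates

def augment(image):
    """Yield augmented copies of a single 2D training image."""
    # Rotations by the four angles named in the text.
    for angle in (10, 20, -10, -20):
        yield rotate(image, angle, reshape=False, mode="nearest")
    # Vertical and horizontal flips.
    yield np.flipud(image)
    yield np.fliplr(image)
    # Elastic transformation: a random displacement field smoothed by a Gaussian.
    alpha, sigma = 34.0, 4.0  # assumed strength and smoothness
    dx = gaussian_filter(np.random.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(np.random.uniform(-1, 1, image.shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]),
                         indexing="ij")
    yield map_coordinates(image, np.array([yy + dy, xx + dx]),
                          order=1, mode="reflect")
    # Scaling within the (20%, 80%) range, padded back to the original size.
    factor = np.random.uniform(0.2, 0.8)
    scaled = zoom(image, factor, order=1)
    padded = np.zeros_like(image)
    padded[:scaled.shape[0], :scaled.shape[1]] = scaled
    yield padded
```

The same geometric transforms would also need to be applied to the corresponding segmentation label maps so that images and labels remain aligned.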

[0064] The pictures in the training set and the pictures in the trai...

Example Embodiment

[0082] Example 2

[0083] Using the segmentation method of Example 1, this embodiment employs the Keras and TensorFlow open-source deep learning libraries, trains on an NVIDIA GeForce RTX-2080Ti GPU, uses the Adam optimization algorithm, and sets the learning rate to 0.0001; the 2018 ISIC skin cancer lesion segmentation data set and the LUNA lung CT data set are used.
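This training configuration can be sketched as follows. `build_model` is a hypothetical factory for the multi-path attention fusion network (its construction is not reproduced here), the single-channel 256×256 input shape follows the resizing described below, and the binary cross-entropy loss reflects the cross-entropy guidance mentioned in the abstract.

```python
# Training-configuration sketch: Adam with learning rate 0.0001, as stated above.
import tensorflow as tf

model = build_model(input_shape=(256, 256, 1))  # hypothetical network factory
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",   # cross-entropy loss guiding the segmentation output
    metrics=["accuracy"],
)
```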

[0084] One data set used in this embodiment is provided by the 2018 Skin Cancer Lesion Segmentation Challenge. It contains a total of 2954 skin cancer lesion pictures, each of size 700×900 with a corresponding segmentation label map; 1815 pictures are used as the training set, 59 pictures as the verification set, and the remaining 520 pictures as the test set. To facilitate network training, all pictures are resized to 256×256. The data in the test set are shown in Figure 4, where the first row is the original image data and the second r...
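A rough sketch of this data preparation is given below; loading from disk is omitted, the arrays are assumed to already carry a channel axis, and the 1815/59/520 partition simply follows the counts stated above.

```python
# Data-preparation sketch: resize to 256x256 and split by the stated counts.
import tensorflow as tf

def prepare(images, masks, size=(256, 256)):
    """images/masks: arrays of shape (N, H, W, C); returns train/val/test tuples."""
    images = tf.image.resize(images, size).numpy()                  # bilinear
    masks = tf.image.resize(masks, size, method="nearest").numpy()  # keep labels crisp
    train = (images[:1815], masks[:1815])                     # 1815 training pictures
    val = (images[1815:1874], masks[1815:1874])                # 59 verification pictures
    test = (images[1874:1874 + 520], masks[1874:1874 + 520])   # 520 test pictures
    return train, val, test
```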

Example Embodiment

[0089] Example 3

[0090] The segmentation method of Example 1 is used. Unlike Example 2, this example uses the LUNA data set, provided by the 2017 Kaggle lung nodule competition. It contains a total of 730 pictures and 730 corresponding segmentation label maps, each picture having a pixel size of 512×512. 70% of the pictures are used as the training set, 10% as the verification set, and the remaining 20% as the test set.

[0091] Because the amount of data is small, rotation, flipping, elastic transformation and other techniques are used to augment the training data set, so that the network achieves good robustness and segmentation accuracy.

[0092] Four evaluation indicators are used: F1-score, Accuracy, Sensitivity and Specificity. The larger these four indicators, the more accurate the segmentation. As can be seen from Table 2, the experimental results on the LUNA data set show that compared with U-Net, R2-Unet, BCD-Net and U-Net++, ...
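The four indicators can be computed from pixel-wise confusion counts. The sketch below is a generic implementation, not the evaluation code of the patent; `eps` is an assumed small constant to avoid division by zero.

```python
# Metric sketch: F1-score, Accuracy, Sensitivity, Specificity from binary masks.
import numpy as np

def segmentation_metrics(pred, label, eps=1e-7):
    pred, label = pred.astype(bool), label.astype(bool)
    tp = np.sum(pred & label)          # true positives
    tn = np.sum(~pred & ~label)        # true negatives
    fp = np.sum(pred & ~label)         # false positives
    fn = np.sum(~pred & label)         # false negatives
    precision = tp / (tp + fp + eps)
    sensitivity = tp / (tp + fn + eps)               # recall
    specificity = tn / (tn + fp + eps)
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    f1 = 2 * precision * sensitivity / (precision + sensitivity + eps)
    return {"F1": f1, "Accuracy": accuracy,
            "Sensitivity": sensitivity, "Specificity": specificity}
```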


Abstract

The invention belongs to the technical field of medical image processing and computer vision, and in particular relates to a medical image automatic segmentation method based on multi-path attention fusion, which comprises the following steps: obtaining a medical image data set, dividing the data set into a training set and a verification set, augmenting the images in the training set, and normalizing the images of the verification set and the augmented images of the training set; inputting the pictures of the training set into a multi-path attention fusion network model and training it under the guidance of a cross-entropy loss function to obtain a segmentation result picture; selecting the model with the highest accuracy on the verification set, loading that model into the multi-path attention fusion network, inputting the test set, and outputting the segmentation result graph of each image. The method addresses the problems that, during medical image segmentation, existing networks cannot effectively improve feature quality at different scales through the encoder and that the inter-layer dependence between low-level structural features and high-level semantic features of the network is difficult to control, which leads to poor segmentation results.
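The "select the model with the highest verification-set accuracy" step corresponds to ordinary checkpointing. The sketch below is an assumed Keras realization using the `model` compiled earlier; the epoch count, batch size and checkpoint file name are placeholders, not values from the patent.

```python
# Best-model selection sketch: keep only the weights with the highest
# validation accuracy, then reload them before running the test set.
import tensorflow as tf

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5",
    monitor="val_accuracy",        # accuracy on the verification set
    save_best_only=True,
    save_weights_only=True,
)
model.fit(train_images, train_masks,
          validation_data=(val_images, val_masks),
          epochs=100, batch_size=8,              # assumed training schedule
          callbacks=[checkpoint])
model.load_weights("best_model.h5")              # selected model for the test set
```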

Description

Technical field

[0001] The invention belongs to the technical field of medical image processing and computer vision, and particularly relates to an automatic segmentation method for medical images based on multi-path attention fusion.

Background technique

[0002] Medical images play a key role in medical treatment and diagnosis. The goal of a computer-aided diagnosis (CAD) system is to provide doctors with accurate interpretations of medical images, so that a large number of patients can be better treated. Moreover, the automatic processing of medical images reduces the time, cost and errors of human-based processing. One of the main research areas in this field is medical image segmentation, which is a key step in many medical imaging studies.

[0003] As in other research fields of computer vision, deep learning networks have achieved excellent results and outperform non-deep techniques in medical imaging. Deep neural networks are mainly used for classification...


Application Information

IPC(8): G06T7/11; G06N3/04; G06N3/08
CPC: G06T7/11; G06T2207/10081; G06T2207/20081; G06T2207/20084; G06T2207/30088; G06T2207/30061; G06N3/08; G06N3/048; G06N3/045
Inventor: 舒禹程, 张晶, 肖斌, 李伟生
Owner: CHONGQING UNIV OF POSTS & TELECOMM