
2.5D medical image segmentation method based on a generative adversarial U-Net network

A medical-image generation technology, applied in image analysis, neural learning methods, biological neural network models, etc. It addresses problems such as the need for large training sets, and achieves the effects of improving segmentation accuracy, reducing the required training-set size, and reducing the annotation workload.

Pending Publication Date: 2022-08-02
CHONGQING JIAOTONG UNIVERSITY

AI Technical Summary

Problems solved by technology

Researchers have proposed a series of improvements to U-Net, such as replacing plain convolutional layers with deep residual networks or dilated (atrous) convolutions and fusing different loss functions. Although these raise segmentation accuracy to some extent, the results are still not ideal.
In addition, existing segmentation network models require a large training set, and to guarantee training quality that set must be annotated manually by relevant medical experts. Existing 2.5D segmentation technology therefore demands a great deal of experts' time and effort to build a training set.



Examples


Embodiment

[0038] As shown in Figures 1 and 2, this embodiment discloses a 2.5D medical image segmentation method based on a generative adversarial U-Net network, comprising the following steps:

[0039] Step 1: Obtain the 3D medical image to be segmented.

[0040] Step 2: Continuously slice the 3D medical image along multiple axes to obtain a 2D slice image group for each axis. In a concrete implementation, the slicing can be completed automatically by a graphics slicing program. Specifically, the multiple axes are the sagittal, coronal, and vertical directions, so the per-axis groups are the sagittal, coronal, and transverse 2D slice image groups. The images in each 2D slice image group are then preprocessed to improve image quality; the preprocessing includes noise removal and window adjustment. Noise removal includes discarding data whose CT intensity falls outside the range (-1024, 1024); window ad...
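Steps 1–2 can be sketched in NumPy as below. This is a minimal illustration, not the patent's implementation: only the stated intensity clipping to (-1024, 1024) is shown (the exact window-adjustment settings are not given in the text), and the mapping of array axes to the sagittal/coronal/transverse directions is an assumption that depends on how the volume was loaded.

```python
import numpy as np

def preprocess(volume, low=-1024, high=1024):
    # Clip CT intensities to the (-1024, 1024) range stated in the patent.
    # Window adjustment is also mentioned but its parameters are not given,
    # so only clipping is sketched here.
    return np.clip(volume, low, high)

def slice_volume(volume):
    # Slice a 3D volume along its three axes into ordered 2D image groups.
    # The axis-to-anatomical-plane mapping below is an assumption.
    return {
        "sagittal":   [volume[i, :, :] for i in range(volume.shape[0])],
        "coronal":    [volume[:, j, :] for j in range(volume.shape[1])],
        "transverse": [volume[:, :, k] for k in range(volume.shape[2])],
    }

# Example: a toy 4x5x6 volume with out-of-range intensities
vol = preprocess(np.random.randint(-2000, 2000, size=(4, 5, 6)))
groups = slice_volume(vol)
print(len(groups["sagittal"]), len(groups["coronal"]), len(groups["transverse"]))
# 4 5 6
```

Each group preserves slice order along its axis, which matters later when the per-axis predictions are stacked back into a 3D volume.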



Abstract

The invention relates to the technical field of medical image segmentation, in particular to a 2.5D medical image segmentation method based on a generative adversarial U-Net network, comprising the following steps: step 1, obtaining a 3D medical image to be segmented; step 2, continuously slicing the 3D medical image along a plurality of axial directions to obtain a 2D slice image group for each axial direction; step 3, inputting the images in each axial 2D slice image group into the corresponding axial segmentation network model to obtain the corresponding axial predicted segmentation images, wherein the segmentation network model comprises a generative adversarial network (GAN) and a U-Net model, with the U-Net model serving as the generator of the GAN; and step 4, stacking the predicted segmentation images in each axial direction to obtain a 3D prediction image for the corresponding axial direction. The method can effectively reduce the number of training samples required, reduce the workload of medical experts, and improve segmentation precision.
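Step 4 of the abstract, stacking each axis's ordered 2D predictions into a per-axis 3D volume, can be sketched as follows. The `fuse_majority` step is not stated in the abstract; combining the three axial volumes by per-voxel majority vote is a common follow-up in 2.5D pipelines and is included here only as an assumption.

```python
import numpy as np

def stack_predictions(slices, axis):
    # Stack an ordered list of 2D predicted masks into a 3D volume
    # along the axis they were originally sliced from (step 4).
    return np.stack(slices, axis=axis)

def fuse_majority(vol_a, vol_b, vol_c):
    # Optional fusion of the three axial 3D predictions by per-voxel
    # majority vote -- an assumption, not part of the stated method.
    votes = vol_a.astype(int) + vol_b.astype(int) + vol_c.astype(int)
    return (votes >= 2).astype(np.uint8)

# Toy binary predictions for a 2x2x2 volume, one stacked per axis
pred0 = stack_predictions([np.ones((2, 2)), np.zeros((2, 2))], axis=0)
pred1 = stack_predictions([np.ones((2, 2)), np.ones((2, 2))], axis=1)
pred2 = stack_predictions([np.zeros((2, 2)), np.ones((2, 2))], axis=2)
fused = fuse_majority(pred0, pred1, pred2)
print(fused.shape)  # (2, 2, 2)
```

Because `np.stack` inserts the new axis at the given position, stacking along the same axis that was sliced restores the original voxel layout, so the three axial volumes are directly comparable voxel by voxel.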

Description

technical field [0001] The invention relates to the technical field of medical image segmentation, in particular to a 2.5D medical image segmentation method based on a generative adversarial U-Net network. Background technique [0002] Medical images are an important basis for disease diagnosis. Traditional medical image segmentation is usually performed manually by medical experts; it is time-consuming, demands high accuracy, is affected by subjective and environmental factors, and relies heavily on the doctor's experience. With the growing volume of images to be read, automatic segmentation methods based on deep learning have emerged to relieve doctors of this burden. [0003] At present, mainstream automatic segmentation methods are basically applicable only to 2D medical images. For automatic segmentation of 3D medical images, although there has been much systematic research, the segmentation effect is ...

Claims


Application Information

IPC(8): G06T7/194, G06T7/155, G06N3/04, G06N3/08
CPC: G06T7/194, G06T7/155, G06N3/08, G06T2207/10081, G06T2207/20081, G06T2207/20084, G06T2207/30168, G06T2210/41, G06N3/045
Inventor: 蓝章礼, 黄林, 李芷汀
Owner CHONGQING JIAOTONG UNIVERSITY