Prostate image segmentation method

A prostate image segmentation technology in the field of medical imaging, which addresses the problems of indistinguishable tissue borders, dependence on manual delineation, and long algorithm run time, and achieves fast running speed and good robustness.

Inactive Publication Date: 2018-11-06
BEIJING L H H MEDICAL SCI DEV

AI Technical Summary

Problems solved by technology

However, this process currently relies mainly on manual work by physicians, which is very time-consuming, and the segmentation results vary from person to person.
In the past few decades, some automatic prostate image segmentation algorithms have achieved certain results, but their effectiveness is limited. Prostate magnetic resonance image segmentation remains a very challenging task for the following reasons: 1) the contrast between prostate tissue and the surrounding tissues is low, and their borders are difficult to distinguish; 2) in an MRI image, the area belonging to prostate tissue is very small, so little effective information can be obtained; 3) long algorithm run times may delay clinical diagnosis.



Examples


Embodiment 1

[0063] Embodiment 1: An image segmentation method based on a traditional convolutional neural network (CNN), which consists of convolutional layers, pooling layers, fully connected layers and a softmax classifier layer. As shown in Figure 2, after the image undergoes a series of convolution, pooling and fully connected operations, the output feature vector can accurately identify the image category.

[0064] The convolution feature map h^l of layer l is calculated as:

[0065] h^l(x, y) = f( Σ_{j=0}^{M_x−1} Σ_{k=0}^{M_y−1} w_{jk} · h^{l−1}(x+j, y+k) + b^l )

[0066] In the formula, M_x and M_y represent the length and width of the convolution filter M, respectively; w_{jk} is the weight learned in the convolution kernel; h^{l−1} denotes the input of convolutional layer l; b^l represents the bias of the filter in layer l; and f(·) is the activation function. Popular deep neural networks mostly use the ReLU activation function instead of the traditional sigmoid function to accelerate network convergence. Its mathematical expression is:

[0067] f(x)=max(0,x) ...
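As a minimal sketch of the CNN pipeline described in this embodiment (convolution, ReLU, pooling, fully connected layer, softmax classifier), a PyTorch-style implementation could look like the following; the channel counts, 32×32 input size and two output classes are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal CNN classifier: convolution -> ReLU -> pooling -> fully connected -> softmax."""
    def __init__(self, num_classes=2):  # two classes is an illustrative assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # h^l = f(w * h^(l-1) + b)
            nn.ReLU(inplace=True),                       # f(x) = max(0, x)
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 8 * 8, num_classes)     # assumes a 32x32 input image

    def forward(self, x):
        x = self.features(x)                      # convolution and pooling
        x = torch.flatten(x, 1)                   # flatten for the fully connected layer
        return torch.softmax(self.fc(x), dim=1)   # softmax classifier layer
```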

Embodiment 2

[0071] Embodiment 2: The image segmentation method based on the fully convolutional neural network (FCN) builds on the CNN classification network. As shown in Figure 3, the fully connected layer is converted into a convolutional layer to preserve the spatial two-dimensional information, the resulting two-dimensional feature map is then deconvolved to restore the original image size, and finally the category of each pixel is obtained by pixel-by-pixel classification, thereby achieving image segmentation.
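A minimal sketch of this FCN idea (replace the fully connected layer with a 1×1 convolution to keep the two-dimensional layout, restore the original size with a transposed convolution, then classify pixel by pixel); the channel counts and the two-class setting are illustrative assumptions, not the network described in the patent.

```python
import torch
import torch.nn as nn

class SimpleFCN(nn.Module):
    """Minimal FCN: conv/pool encoder, 1x1 conv in place of the FC layer, transposed-conv upsampling."""
    def __init__(self, num_classes=2):  # prostate vs. background, an assumption
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # fully connected layer converted into a convolution, preserving spatial 2-D information
        self.score = nn.Conv2d(32, num_classes, kernel_size=1)
        # deconvolution (transposed convolution) restores the original image size (factor 4)
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=4, stride=4)

    def forward(self, x):
        feat = self.encoder(x)                  # H/4 x W/4 feature map
        out = self.upsample(self.score(feat))   # back to H x W class scores
        return torch.softmax(out, dim=1)        # pixel-by-pixel classification
```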

[0072] The present invention can adopt the idea of fuzzy sets based on degree of membership for classification, adopting a sigmoid-type membership function.

[0073] Here e is the base of the natural logarithm, and a and c are parameters. π-type functions can be defined in terms of sigmoid functions.

[0074] From the perspective of pixel classification, the standard S-shaped function matches the transition process at the prostate image edge, so the inventive point of this embodiment is to use the S-shaped function as the basic transformation f...
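The membership function itself is not reproduced in this excerpt; as an assumption consistent with [0073] (natural exponential e, parameters a and c), the standard S-shaped (sigmoid) membership function f(x; a, c) = 1 / (1 + e^(−a(x − c))) can serve as a stand-in. A small sketch of applying it to pixel intensities:

```python
import numpy as np

def sigmoid_membership(x, a, c):
    """Standard S-shaped membership function 1 / (1 + exp(-a * (x - c))).
    a sets the steepness of the transition and c its center; the values below are assumptions."""
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

# Illustrative use: map normalized pixel intensities to fuzzy membership degrees.
pixels = np.linspace(0.0, 1.0, 5)
print(sigmoid_membership(pixels, a=10.0, c=0.5))  # smooth 0 -> 1 transition around c, like a soft edge
```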

Embodiment 3

[0091] Embodiment 3: The present invention provides a two-dimensional image segmentation method based on an improved fully convolutional neural network (FCN). As shown in Figure 4, the method proceeds through the following steps:

[0092] S1. Obtain the training samples of the prostate region and mark them;

[0093] S11. Use a 1.5 T magnetic resonance system and an 8-channel phased array to perform spin-echo single-shot EPI imaging of the prostate in the training subjects. The imaging parameters are: TR 4800-5000 ms, TE 102 ms, slice thickness 3.0 mm, slice spacing 0.5 mm, echo train length 24, phase 256, frequency 288, NEX 4.0, bandwidth 31.255 kHz, image size 512×512 pixels;

[0094] S12. Staff manually segment the images using MITK software.

[0095] S2. Perform preprocessing on the training prostate area to obtain a preprocessing result;

[0096] S21. Calculate the average intensity value and standard deviation of all training images;

[0097] S22. Perform a normalization operation, including subtracting the mean and div...
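A minimal sketch of the preprocessing in S21-S22 (compute the mean intensity and standard deviation over all training images, then subtract the mean and divide by the standard deviation); the array shape and the epsilon safeguard are assumptions, not details from the patent.

```python
import numpy as np

def normalize_images(images):
    """images: array of shape (N, H, W) holding the training slices (shape is an assumption).
    S21: compute the mean intensity and standard deviation over all training images.
    S22: subtract the mean and divide by the standard deviation."""
    mean = images.mean()
    std = images.std()
    return (images - mean) / (std + 1e-8), mean, std  # epsilon guards against division by zero

# Illustrative use with random data standing in for 512x512 MRI slices.
train = np.random.rand(10, 512, 512).astype(np.float32)
normalized, mean, std = normalize_images(train)
print(normalized.mean(), normalized.std())  # approximately 0 and 1
```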



Abstract

The invention provides a prostate image segmentation method. The method comprises the steps that S1, prostate region training samples are acquired and marked; S2, a training prostate region is preprocessed to obtain a preprocessing result; S3, a full-convolution network structure for prostate region-of-interest segmentation is constructed; S4, the training samples are utilized to train a prostate segmentation model so as to acquire an optimal prostate image segmentation model; S5, a prostate region sample of an object is acquired and marked; S6, a testing prostate region is preprocessed to obtain a preprocessing result; S7, the trained segmentation model is used to segment a test set; S8, segmentation results of the full-convolution network are post-processed; and S9, evaluation indexes for image segmentation are selected to perform statistical evaluation on the segmentation results. Through the method, pixel classification precision is improved; the method has scale invariance, is high in segmentation speed, and has a good application prospect.
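The evaluation indexes in S9 are not named in this excerpt; as an assumption, the Dice similarity coefficient is a common index for statistically evaluating segmentation results, sketched below.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between a predicted mask and a ground-truth mask.
    Both inputs are binary arrays of the same shape; 1 marks prostate pixels."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)

# Illustrative use with toy 4x4 masks.
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(dice_coefficient(pred, gt))  # 2 * 4 / (8 + 4) ≈ 0.667
```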

Description

Technical field

[0001] The invention relates to the field of medical images, and in particular to the segmentation of the prostate in medical images.

Background technique

[0002] Prostate cancer is one of the major health problems in older men. Studies have found that the incidence of chronic prostatitis in men is as high as 2.5% to 16%, and prostate cancer is the second leading cause of cancer death in men. The diagnosis of prostate disease has always been a focus of imaging research. Currently, common prostate imaging diagnosis and treatment methods include transrectal ultrasonography (TRUS), computed tomography (CT) and magnetic resonance imaging (MRI). Compared with other imaging methods, magnetic resonance imaging distinguishes the anatomical regions of the prostate more clearly and is more sensitive to diseased tissue. Therefore, magnetic resonance imaging is recognized as the most effective method for diagnosing prostate cancer and plays an important role...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11; G06T7/136
CPC: G06T7/11; G06T7/136; G06T2207/30081; G06T2207/10088; G06T2207/20084; G06T2207/20081
Inventor: 叶慧
Owner: BEIJING L H H MEDICAL SCI DEV