
MRI image and CT image conversion method and terminal based on deep learning

A method and terminal for converting between MRI images and CT images based on deep learning, applied in the field of medical image processing. It addresses problems such as inaccurate feature extraction, long examination times, and poor soft-tissue contrast, with the effects of avoiding redundant model design, improving image conversion efficiency, and avoiding the influence of noise.

Pending Publication Date: 2022-04-01
SHENZHEN YINO INTELLIGENCE TECH

AI Technical Summary

Problems solved by technology

The advantages of MRI images are that there is no ionizing radiation during scanning, so the impact on the human body is small, the tissue resolution is high, and there are no bone artifacts; their disadvantages are long examination times, low spatial resolution, and unsuitability for patients with metal implants. The advantages of CT images are that they provide the density information needed for radiotherapy dose planning, offer high spatial resolution, and are simple to acquire; their disadvantages are poor soft-tissue contrast and the ionizing radiation involved in scanning, which affects the human body. Performing both CT and MRI places a large physical and financial burden on the patient. Converting between MRI images and CT images so that the two modalities complement each other is therefore clinically significant.
[0003] In the prior art, a statistical learning method is used to establish a mapping model between MRI voxel intensity values and CT gray values, but this model requires manual feature extraction, and inaccurate feature extraction easily leads to conversion errors.

Method used



Examples


Embodiment 1

[0085] Please refer to figure 1. The method for converting MRI images and CT images based on deep learning in this embodiment includes:

[0086] S1. Obtain a training MRI image and a training CT image, and perform N4 bias correction and histogram matching on the training MRI image and the training CT image to obtain a preprocessed training MRI image and a preprocessed training CT image;

[0087] In S1, performing N4 bias correction and histogram matching on the training MRI image and the training CT image to obtain the preprocessed training MRI image and the preprocessed training CT image includes:

[0088] S11. Obtain a training MRI grayscale image and a training CT grayscale image according to the training MRI image and the training CT image;

[0089] Specifically, set corresponding file paths for the training MRI image source_A and the training CT image source_B, crop the training MRI image source_A and the training CT image source_B to the same size, name th...
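The excerpt does not include code for this preprocessing step. The following is a minimal sketch of one way to perform N4 bias correction and histogram matching, assuming the SimpleITK library; the choice of reference image for histogram matching and all filter parameters are assumptions, not taken from the patent.

```python
# Hypothetical preprocessing sketch using SimpleITK (not from the patent text).
import SimpleITK as sitk

def preprocess_pair(mri_path: str, ct_path: str):
    """N4 bias correction on the MRI and histogram matching, roughly following step S1."""
    mri = sitk.Cast(sitk.ReadImage(mri_path), sitk.sitkFloat32)
    ct = sitk.Cast(sitk.ReadImage(ct_path), sitk.sitkFloat32)

    # N4 bias field correction of the MRI, using an Otsu mask of the foreground.
    mask = sitk.OtsuThreshold(mri, 0, 1, 200)
    n4 = sitk.N4BiasFieldCorrectionImageFilter()
    mri_corrected = n4.Execute(mri, mask)

    # Histogram matching; here the MRI is matched to the CT as an illustration,
    # since the excerpt does not name the reference image.
    matcher = sitk.HistogramMatchingImageFilter()
    matcher.SetNumberOfHistogramLevels(1024)
    matcher.SetNumberOfMatchPoints(7)
    matcher.ThresholdAtMeanIntensityOn()
    mri_matched = matcher.Execute(mri_corrected, ct)

    return mri_matched, ct
```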

Embodiment 2

[0152] Please refer to figures 1 and 3-6. On the basis of Embodiment 1, this embodiment further defines how the first final fully convolutional neural network model and the second final fully convolutional neural network model are obtained through training, specifically:

[0153] Step S2 is specifically:

[0154] S21. Input the preprocessed training MRI image into the encoder network of the initial fully convolutional neural network model to obtain a first low-resolution feature map and a first high-resolution feature map;

[0155] Wherein, the encoder network includes a first convolutional layer, a first activation layer, a second convolutional layer, a second activation layer, a maximum pooling layer, a third activation layer and a downsampling layer arranged in sequence;

[0156] Specifically, input the preprocessed training MRI image into the encoder network of the initial fully convolutional neural network model, and then pass it through the first convolutional layer, the first activation...
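As a sketch of the encoder described in [0155], the following PyTorch example arranges the named layers in the stated order and returns the two feature maps of S21; channel counts, kernel sizes, and the choice of ReLU activations are assumptions, since the excerpt does not specify them.

```python
# Hypothetical PyTorch sketch of the encoder of [0155]; sizes are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_channels: int = 1, features: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, features, kernel_size=3, padding=1)  # first convolutional layer
        self.act1 = nn.ReLU(inplace=True)                                        # first activation layer
        self.conv2 = nn.Conv2d(features, features, kernel_size=3, padding=1)     # second convolutional layer
        self.act2 = nn.ReLU(inplace=True)                                        # second activation layer
        self.pool = nn.MaxPool2d(kernel_size=2)                                  # maximum pooling layer
        self.act3 = nn.ReLU(inplace=True)                                        # third activation layer
        self.down = nn.Conv2d(features, features * 2, kernel_size=3,
                              stride=2, padding=1)                               # downsampling layer

    def forward(self, x: torch.Tensor):
        x = self.act2(self.conv2(self.act1(self.conv1(x))))
        high_res = x                       # first high-resolution feature map
        low_res = self.down(self.act3(self.pool(x)))  # first low-resolution feature map
        return low_res, high_res

# Usage: low, high = Encoder()(torch.randn(1, 1, 256, 256))
```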

Embodiment 3

[0241] The difference between this embodiment and Embodiment 1 or Embodiment 2 is that some specific parameters for training the initial fully convolutional neural network model are further specified:

[0242] Specify the GPU and choose a single card or multiple cards according to the size of the image data, that is, os.environ['CUDA_VISIBLE_DEVICES']=GPU_id;

[0243] The image batch size Batch_size is set to 8;

[0244] The learning rate learning_rate is set to 0.0001 (1e-4);

[0245] The decay factor weight_decay is set to 1e-4;

[0246] The preset training round epoch is set to 600;

[0247] The model saving interval save_model is set to every 50 epochs;
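Collecting the parameters of [0242]-[0247] into one place, as a sketch (the variable names follow the patent's identifiers; the dictionary layout itself is an assumption):

```python
# Hypothetical training configuration assembled from paragraphs [0242]-[0247].
import os

GPU_id = "0"  # single card; e.g. "0,1" for multiple cards, chosen by image data size
os.environ['CUDA_VISIBLE_DEVICES'] = GPU_id

train_config = {
    "Batch_size": 8,         # image batch size
    "learning_rate": 1e-4,   # learning rate
    "weight_decay": 1e-4,    # decay factor
    "epoch": 600,            # preset number of training epochs
    "save_model": 50,        # save a checkpoint every 50 epochs
}
```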

[0248] The error outputs of training, validation and testing are MAE (Mean Absolute Error), ME (Mean Error), MSE (Mean Squared Error) and PCC (Pearson Correlation Coefficient); through the multiple measures of MAE, ME, MSE and PCC, multiple measuremen...
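The patent names these four error measures but the excerpt does not show how they are computed; a minimal NumPy sketch, with an illustrative function name, is:

```python
# Minimal NumPy sketch of the four error measures of [0248]; the function
# name is illustrative, not from the patent.
import numpy as np

def error_metrics(pred_ct: np.ndarray, true_ct: np.ndarray) -> dict:
    diff = pred_ct.astype(np.float64) - true_ct.astype(np.float64)
    mae = np.mean(np.abs(diff))    # Mean Absolute Error
    me = np.mean(diff)             # Mean Error
    mse = np.mean(diff ** 2)       # Mean Squared Error
    pcc = np.corrcoef(pred_ct.ravel(), true_ct.ravel())[0, 1]  # Pearson Correlation Coefficient
    return {"MAE": mae, "ME": me, "MSE": mse, "PCC": pcc}
```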



Abstract

The invention discloses an MRI image and CT image conversion method and terminal based on deep learning. The method comprises the steps of: obtaining a training MRI image and a training CT image, carrying out N4 bias correction and histogram matching on the training MRI image and the training CT image, and obtaining a preprocessed training MRI image and a preprocessed training CT image; training and verifying an initial fully convolutional neural network model based on the preprocessed training MRI image and the preprocessed training CT image to obtain a final fully convolutional neural network model; and obtaining a to-be-converted image, and inputting the to-be-converted image into the final fully convolutional neural network model to obtain a composite image corresponding to the to-be-converted image, thereby improving the accuracy of conversion between MRI images and CT images.
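To illustrate the final step of the abstract, here is a hypothetical inference sketch that applies a trained model to a to-be-converted MRI slice; the checkpoint name, model serialization format, and input size are assumptions, not from the patent.

```python
# Hypothetical inference sketch (checkpoint name, input shape, and the use of
# torch.load on a whole serialized model are assumptions).
import torch

model = torch.load("final_fcn_model.pth", map_location="cpu")
model.eval()

with torch.no_grad():
    mri_slice = torch.randn(1, 1, 256, 256)  # placeholder for a preprocessed to-be-converted MRI slice
    synthetic_ct = model(mri_slice)          # composite CT image corresponding to the input
print(synthetic_ct.shape)
```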

Description

Technical field

[0001] The present invention relates to the technical field of medical image processing, and in particular to a method and terminal for converting between MRI images and CT images based on deep learning.

Background technique

[0002] Currently, commonly used clinical medical images include MRI (Magnetic Resonance Imaging) images and CT (Computed Tomography) images. The advantages of MRI images are that there is no ionizing radiation during scanning, so the impact on the human body is small, the tissue resolution is high, and there are no bone artifacts; their disadvantages are long examination times, low spatial resolution, and unsuitability for patients with metal implants. The advantages of CT images are that they provide the density information needed for radiotherapy dose planning, offer high spatial resolution, and the operation is...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06V10/774, G06V10/74, G06V10/82, G06N3/04, G06T5/50, G06K9/62
Inventor 刘吉平张翔单国平陈炜时建芳王彬冰王俊谢宝文
Owner SHENZHEN YINO INTELLIGENCE TECH