
Cross-modal MR image mutual generation method based on cyclic generative adversarial network CycleGAN model

A cross-modal imaging technology, applied in the field of computer vision, that addresses problems such as the loss of biological tissue structure information, the difficulty of acquiring MR images, and low image quality.

Pending Publication Date: 2021-08-03
FUDAN UNIV +1

AI Technical Summary

Problems solved by technology

However, because a GAN generates images from random noise and cannot simulate the texture and structure of biological tissue, the quality of the generated images remains limited, with problems such as low fidelity and loss of biological tissue structure information.
Moreover, GAN training requires that the source-modality image used as input be paired with a real image of the target modality, so that the loss function can be minimized and the model trained. Since MR images are difficult to obtain, paired data that can serve as a training set are relatively scarce.




Embodiment Construction

[0019] To make the technical means, creative features, aims, and effects of the present invention easy to understand, the following embodiment illustrates the cross-modal MR image mutual generation method based on the CycleGAN model, with reference to the accompanying drawings.


[0021] This embodiment describes in detail the cross-modal MR image mutual generation method based on the CycleGAN model.

[0022] Figure 1 is a schematic structural diagram of the cyclic generative adversarial network (CycleGAN) model in this embodiment.

[0023] As shown in Figure 1, the CycleGAN model includes a generator and a discriminator.

[0024] The generator consists of an input layer, convolutional layers, residual blocks, and deconvolutional (transposed convolution) layers.

[0025] The input to the generator's input layer is a source-modality MR image. In this embodiment, the input source-modality MR image is a brain T1-weighted MR ...
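The generator layout described above (an input layer, strided convolutional downsampling, residual blocks, then deconvolutional upsampling back to image size) can be sketched as follows. This is a minimal illustration, not the patent's exact architecture: the channel counts, kernel sizes, number of residual blocks, and the use of PyTorch are all assumptions for the sketch.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 conv -> norm -> ReLU -> 3x3 conv -> norm, plus a skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class Generator(nn.Module):
    """Input layer -> downsampling conv -> residual blocks -> deconv upsampling."""

    def __init__(self, in_ch=1, base=32, n_res=3):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(inplace=True)]      # input layer
        layers += [nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
                   nn.ReLU(inplace=True)]                                           # downsample 2x
        layers += [ResidualBlock(base * 2) for _ in range(n_res)]                   # residual blocks
        layers += [nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1,
                                      output_padding=1),
                   nn.ReLU(inplace=True)]                                           # upsample 2x
        layers += [nn.Conv2d(base, in_ch, 7, padding=3), nn.Tanh()]                 # output layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


g = Generator()
fake = g(torch.randn(1, 1, 64, 64))  # a dummy 64x64 single-channel "T1" image
print(fake.shape)                    # same spatial size as the input
```

Note that the transposed convolution exactly undoes the stride-2 downsampling, so the synthesized image has the same resolution as the input image, which is what a modality-translation generator requires.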



Abstract

The invention belongs to the field of computer vision and provides a cross-modal MR image mutual generation method based on the cyclic generative adversarial network (CycleGAN) model. The CycleGAN model can generate synthetic images that closely approximate real images of the target modality. By adopting a cyclic network structure, the training set is no longer limited to pairwise-matched source-modality and target-modality images: unpaired images suffice, which reduces the difficulty of training the model and broadens its range of use. At the same time, realistic MR images of different modalities can be obtained through the cross-modal mutual generation model. Compared with simple data-augmentation methods, the synthesized images have better fidelity and retain more biological tissue structure information. They can serve as training data for models of downstream tasks such as MR image segmentation and classification, achieving training-set expansion and data augmentation, and thus effectively alleviating the difficulty of MR image acquisition and the scarcity of data.
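The cyclic structure mentioned above is what removes the pairing requirement: instead of comparing a generated image against a paired ground-truth image, CycleGAN maps a source-modality image to the target modality and back again, and penalizes the reconstruction error. A hedged NumPy sketch of this cycle-consistency term (the standard L1 form from the CycleGAN literature, with toy stand-in generators, not the patent's networks):

```python
import numpy as np


def cycle_consistency_loss(G_ab, G_ba, x_a):
    """L1 cycle loss: x_a -> G_ab -> G_ba should approximately reproduce x_a."""
    x_b_fake = G_ab(x_a)      # translate source modality A to target modality B
    x_a_rec = G_ba(x_b_fake)  # translate back to modality A
    return np.mean(np.abs(x_a_rec - x_a))


# Toy stand-in "generators": an exactly invertible pair, so the loss is ~0.
G_ab = lambda x: 2.0 * x + 1.0
G_ba = lambda x: (x - 1.0) / 2.0

x = np.random.rand(4, 64, 64)  # a small batch of dummy single-channel images
print(cycle_consistency_loss(G_ab, G_ba, x))  # near zero for this invertible pair
```

Because the loss compares each image only with its own reconstruction, no paired target-modality image is ever needed, which is exactly why the training set can consist of unpaired images.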

Description

technical field [0001] The invention belongs to the field of computer vision and in particular relates to a cross-modal MR image mutual generation method based on the cyclic generative adversarial network (CycleGAN) model. Background technique [0002] Magnetic resonance imaging (MRI) is a commonly used and very important disease-monitoring technology. MR imaging can depict human organs in planes of any orientation (lateral, coronal, sagittal, and transverse) and capture the rich texture of human organs. MR images are therefore widely used in clinical diagnosis (such as early diagnosis of diseases), surgical simulation, and evaluation of the physical properties of biological tissues. As the preferred method for evaluating soft-tissue lesions, MR imaging can provide a variety of different contrasts, and different contrast images of the same pathology provide richer diagnostic information. For example, T1-weighted images ...

Claims


Application Information

IPC(8): G06T5/50; G06N3/04; G06N3/08
CPC: G06T5/50; G06N3/084; G06T2207/20081; G06T2207/20084; G06T2207/30016; G06T2207/10088; G06T2207/20221; G06N3/045
Inventors: 王润涵, 冯瑞
Owner: FUDAN UNIV