
Semi-supervised multi-mode nuclear magnetic resonance image synthesis method based on coarse-to-fine learning

An image synthesis and multi-modal technology applied in the field of image processing. It addresses the difficulty of obtaining paired multi-modal data, the waste of useful information when unpaired data are discarded, and the inability of supervised learning models to reach the desired performance with only a small amount of paired data.

Pending Publication Date: 2022-03-11
BEIJING JIAOTONG UNIV


Problems solved by technology

The disadvantages of existing methods are as follows: in practice it is very difficult to obtain paired multi-modal data, and data sets often contain a large proportion of unpaired data. Directly discarding the unpaired data wastes useful information, while training with only the small amount of paired data prevents supervised learning models, which require large amounts of paired data, from achieving the desired effect.




Embodiment Construction

[0046] Embodiments of the present invention are described in detail below, examples of which are shown in the drawings, wherein the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the figures are exemplary, intended only to explain the present invention, and should not be construed as limiting it.

[0047] Those skilled in the art will understand that, unless otherwise stated, the singular forms "a", "an", "said" and "the" used herein may also include plural forms. It should be further understood that the word "comprising" used in this description refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It will be understoo...



Abstract

The invention provides a semi-supervised multi-modal nuclear magnetic resonance (MRI) image synthesis method based on coarse-to-fine learning. The method comprises the following steps: from the multi-modal data set of all cases, construct a large subset of unpaired data and a small subset of paired data; build an MRI synthesis model comprising a generative network and an enhancement network, training the generative network on all of the data and the enhancement network on only the small paired subset; input an image of the source modality into the trained generative network, which uses the learned cross-modal distribution mapping to produce a coarse synthesized image of the corresponding target modality; input the coarse synthesized image into the trained enhancement network, which refines it to obtain an enhanced image of the target modality; and output the cross-modal synthesized MR image of the target modality. The method can be applied to cross-modal medical image synthesis, using a patient's existing source-modality images to synthesize target-modality images and thereby assisting doctors in disease diagnosis.
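To make the two-stage pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of a coarse-to-fine synthesis model. The network architectures, layer sizes, toy data, and training loops are illustrative assumptions, not the patented design; in particular, the adversarial or distribution-matching losses that would let the generative network exploit the unpaired data are omitted for brevity.

```python
import torch
import torch.nn as nn

class GenerativeNetwork(nn.Module):
    """Stage 1: maps a source-modality slice to a coarse target-modality slice."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

class EnhancementNetwork(nn.Module):
    """Stage 2: refines the coarse synthesis into the final target-modality slice."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, coarse):
        return coarse + self.body(coarse)   # residual refinement of the coarse image

# Toy stand-ins for the data split described in the abstract:
# a small paired subset (source, target) and a larger unpaired subset of 64x64 slices.
paired   = [(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)) for _ in range(4)]
unpaired = [torch.randn(1, 1, 64, 64) for _ in range(12)]

gen, enh = GenerativeNetwork(), EnhancementNetwork()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_e = torch.optim.Adam(enh.parameters(), lr=1e-4)
l1 = nn.L1Loss()

# Stage 1: train the generative network. Only the paired branch is sketched here;
# the unpaired samples would additionally drive adversarial / distribution losses.
for x_src, x_tgt in paired:
    loss = l1(gen(x_src), x_tgt)
    opt_g.zero_grad(); loss.backward(); opt_g.step()

# Stage 2: train the enhancement network on the small paired subset only,
# refining the frozen generator's coarse output toward the true target modality.
for x_src, x_tgt in paired:
    with torch.no_grad():
        coarse = gen(x_src)
    loss = l1(enh(coarse), x_tgt)
    opt_e.zero_grad(); loss.backward(); opt_e.step()

# Inference: source-modality image -> coarse synthesis -> refined target image.
x_new = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    synthesized = enh(gen(x_new))
print(synthesized.shape)   # torch.Size([1, 1, 64, 64])
```

The residual formulation of the enhancement network is one plausible reading of "refined enhancement": the second stage only learns the correction to the coarse synthesis rather than regenerating the image from scratch.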

Description

Technical field
[0001] The invention relates to the technical field of image processing, and in particular to a semi-supervised multi-modal nuclear magnetic resonance image synthesis method based on coarse-to-fine learning.
Background technique
[0002] In recent years, with the rapid development of digital information technologies such as imaging, medical care, sensor networks, and multi-sensory devices, the generation and collection of multi-modal data has become increasingly convenient. So-called multi-modal data refers to information about the same semantic object that comes from multiple sources or in multiple forms, each describing the sample from a different angle. For example, when we browse the web, the description of an object on a web page may contain pictures, text, and hyperlinks. Compared with single-modal data, multi-modal data contains more information, and comprehensively considering the information it carries allows us to have a deeper un...


Application Information

IPC(8): G06T5/50; G06N3/04; G06N3/08
CPC: G06T5/50; G06N3/08; G06T2207/10081; G06T2207/10088; G06T2207/10104; G06T2207/20081; G06T2207/20084; G06T2207/30041; G06N3/045
Inventors: 朱振峰, 闫琨, 刘志哲, 郑帅, 国圳宇, 赵耀
Owner: BEIJING JIAOTONG UNIV