
A Deep Learning Neural Network Model System for Multimodal Image Synthesis

A neural network model and deep neural network technology, applied in the field of deep learning neural network model systems, which can solve the problem that sCT images lack structural fidelity and achieve the effects of high HU accuracy and good structural fidelity.

Active Publication Date: 2021-10-01
PEKING UNIV THIRD HOSPITAL
Cites: 17 · Cited by: 0

AI Technical Summary

Problems solved by technology

Second, the networks described above lack constraints that would improve the structural fidelity of sCT images.




Embodiment Construction

[0023] Hereinafter, the present invention is described in more detail to facilitate understanding.

[0024] As shown in Figure 1 and Figure 2, the deep learning neural network model system for multimodal image synthesis described in the present invention includes a multi-resolution residual deep neural network formed by combining a residual deep neural network (RDNN) with a multi-resolution optimization strategy. The residual deep neural network includes 15 convolutional layers, each of which can be expressed in the form (k×k) conv, n, where k represents the size of the convolution kernel and n represents the number of convolution kernels. To avoid network overfitting, 7 dropout layers (20% dropout rate) are added to the residual deep neural network; 7 batch normalization layers are used to normalize the input of the corresponding convolution kernels, thereby stabilizing the ne...
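The excerpt names the building blocks ((k×k) conv, n convolutional layers, 20% dropout, batch normalization, long residual connections) but does not fix the framework, the dimensionality of the convolutions, or concrete values of k and n. The following is a minimal sketch of one repeating unit under those assumptions (PyTorch, 2D convolutions, k=3, n=64); it illustrates the described layer arrangement and is not the patented implementation.

```python
# Minimal sketch of one RDNN unit, assuming PyTorch, 2D convolutions,
# k=3 and n=64 (the patent excerpt does not fix these choices).
import torch
import torch.nn as nn


class RDNNUnit(nn.Module):
    """Conv -> dropout -> conv -> batch norm, wrapped in a long residual
    connection that adds the unit's input back to its output so the
    structural information of the input feature map is preserved."""

    def __init__(self, n: int = 64, k: int = 3):
        super().__init__()
        pad = k // 2  # "same" padding so the residual addition is shape-compatible
        self.conv_in = nn.Conv2d(n, n, kernel_size=k, padding=pad)   # (k x k) conv, n
        self.dropout = nn.Dropout2d(p=0.2)                           # 20% dropout rate
        self.conv_mid = nn.Conv2d(n, n, kernel_size=k, padding=pad)  # conv between dropout and BN
        self.bn = nn.BatchNorm2d(n)                                  # normalizes the conv output
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                      # one end of the long residual connection
        out = self.act(self.conv_in(x))
        out = self.dropout(out)
        out = self.conv_mid(out)
        out = self.bn(out)
        return self.act(out + identity)   # other end: merge before the next unit
```

Because the convolutions use "same" padding, the unit preserves spatial size, which is what makes the residual addition well defined.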



Abstract

The invention relates to a deep learning neural network model system for multimodal image synthesis, including a multi-resolution residual deep neural network formed by combining a residual deep neural network (RDNN) with a multi-resolution optimization strategy. The RDNN includes A convolutional layers, B dropout layers, C batch normalization layers and D long residual connections, where the convolutional layers are used to extract image features, the dropout layers are used to avoid network overfitting, the batch normalization layers are used to normalize the input of the corresponding convolution kernels, and the long residual connections are used to preserve the structural information in the input image. Convolutional layers are provided on both sides of each dropout layer, and each dropout layer is connected to the convolutional layers adjacent to it on both sides; a convolutional layer is set between each dropout layer and its batch normalization layer, with the dropout layer, the convolutional layer and the batch normalization layer connected in sequence; one end of each long residual connection is connected between a convolutional layer and a batch normalization layer, and the other end is connected between another convolutional layer and its batch normalization layer.
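As a brief, hypothetical usage sketch building on the RDNNUnit class sketched in the embodiment section above (so not the full patented network): stacking several such units between an input convolution and an output convolution keeps spatial dimensions unchanged, which is what lets the long residual connections carry structural information from the CBCT input through to the synthesized output. The unit count, channel width, slice size and single-channel input/output are assumptions.

```python
# Hypothetical stacking of RDNN units (assumes the RDNNUnit class sketched
# above is in scope); the layer count, channel width and slice size are
# illustrative assumptions, not values taken from the patent.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),    # lift a single-channel CBCT slice to 64 features
    *[RDNNUnit(n=64, k=3) for _ in range(7)],      # 7 dropout/batch-norm units, matching the counts above
    nn.Conv2d(64, 1, kernel_size=3, padding=1),    # project back to a single-channel sCT slice
)

cbct_slice = torch.randn(1, 1, 256, 256)           # batch of one 256x256 slice
sct_slice = net(cbct_slice)
print(sct_slice.shape)                             # torch.Size([1, 1, 256, 256])
```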

Description

Technical Field

[0001] The invention relates to the technical field of medical image processing and image-guided treatment, and in particular to a deep learning neural network model system for multimodal image synthesis.

Background Art

[0002] Adaptive Radiotherapy (ART) based on CBCT images can increase the radiation dose delivered to the tumor while protecting the organs at risk near the tumor. However, CBCT images in the prior art have low HU accuracy, low soft-tissue resolution and severe artifacts. Therefore, in order to realize adaptive radiotherapy (ART), synthetic CT (sCT) images with high HU accuracy and structural fidelity must first be generated from the CBCT images.

[0003] U-Net and other deep learning networks have been widely used for the task of sCT image generation. Studies have shown that the HU accuracy of the generated sCT images is significantly improved, and that dose calculations can be performed based...
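The background emphasizes HU accuracy as the key requirement for sCT images. As an illustration only (the metric is not named in this excerpt), below is a minimal sketch of the mean absolute error in Hounsfield units between a synthetic CT and a reference planning CT, one common way such accuracy is quantified; the function name, array shapes and the optional body mask are assumptions.

```python
# Illustrative HU-accuracy check (not taken from the patent text): mean
# absolute error in Hounsfield units between an sCT and a reference CT,
# optionally restricted to a body mask. Names and shapes are assumptions.
from typing import Optional

import numpy as np


def mae_hu(sct: np.ndarray, ref_ct: np.ndarray, mask: Optional[np.ndarray] = None) -> float:
    """Mean absolute HU difference between synthetic CT and reference CT."""
    diff = np.abs(sct.astype(np.float64) - ref_ct.astype(np.float64))
    if mask is not None:
        diff = diff[mask.astype(bool)]
    return float(diff.mean())


# Random volumes standing in for registered sCT / planning-CT data
sct = np.random.randint(-1000, 2000, size=(32, 64, 64))
ref = np.random.randint(-1000, 2000, size=(32, 64, 64))
print(f"MAE: {mae_hu(sct, ref):.1f} HU")
```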

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T3/40; G06T11/00; G06N3/04
CPC: G06T3/4046; G06T11/008; G06N3/045
Inventors: 武王将, 杨瑞杰, 庄洪卿, 王皓
Owner: PEKING UNIV THIRD HOSPITAL