
Deep learning neural network model system for multi-modal image synthesis

A deep learning neural network technology, applied to a neural network model system for multi-modal image synthesis, which addresses the lack of structural fidelity in synthetic CT (sCT) images and achieves high HU accuracy and good structural fidelity

Active Publication Date: 2021-08-13
PEKING UNIV THIRD HOSPITAL

AI Technical Summary

Problems solved by technology

Second, the above networks lack constraints to improve the structural fidelity of sCT images.

Method used




Detailed Description of the Embodiments

[0023] Hereinafter, the present invention is described in more detail to facilitate its understanding.

[0024] As shown in Figure 1 and Figure 2, the deep learning neural network model system for multi-modal image synthesis described in the present invention includes a multi-resolution residual deep neural network formed by combining a residual deep neural network (RDNN) with a multi-resolution optimization strategy. The residual deep neural network includes 15 convolutional layers, each of which can be expressed in the form (k×k) conv, n, where k denotes the size of the convolution kernel and n denotes the number of convolution kernels. To avoid overfitting, 7 dropout layers (20% dropout rate) are added to the residual deep neural network; 7 batch normalization layers are used to normalize the input of the corresponding convolution kernels, thereby stabilizing the network...
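The layer counts given above (15 convolutional layers, 7 dropout layers at a 20% rate, 7 batch normalization layers, plus long-term residual connections) can be sketched in code. The following is a minimal PyTorch sketch for illustration only: the kernel size k, the number of feature channels, the activation function, the 2D slice-wise formulation, and the exact placement of the plain convolutional layers are assumptions not specified in this excerpt.

import torch
import torch.nn as nn

class RDNNBlock(nn.Module):
    # One dropout -> convolution -> batch-normalization group with a
    # long-term residual (skip) connection that preserves structural
    # information from its input.
    def __init__(self, channels, k=3):
        super().__init__()
        self.drop = nn.Dropout2d(p=0.2)        # 20% dropout rate, as in [0024]
        self.conv = nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
        self.bn = nn.BatchNorm2d(channels)     # normalizes the convolution output
        self.act = nn.ReLU(inplace=True)       # activation choice is an assumption

    def forward(self, x):
        return self.act(x + self.bn(self.conv(self.drop(x))))

class RDNN(nn.Module):
    # 1 head conv + 7 residual groups + 6 plain convs + 1 tail conv = 15 conv
    # layers, with 7 dropout and 7 batch normalization layers in total.
    def __init__(self, in_ch=1, out_ch=1, feat=64, k=3):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feat, kernel_size=k, padding=k // 2)
        self.groups = nn.Sequential(*[RDNNBlock(feat, k) for _ in range(7)])
        self.mid = nn.Sequential(*[nn.Conv2d(feat, feat, kernel_size=k, padding=k // 2)
                                   for _ in range(6)])
        self.tail = nn.Conv2d(feat, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.tail(self.mid(self.groups(self.head(x))))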



Abstract

The invention relates to a deep learning neural network model system for multi-modal image synthesis. The system comprises a multi-resolution residual deep neural network formed by combining a residual deep neural network (RDNN) with a multi-resolution optimization strategy. The RDNN comprises A convolutional layers, B dropout layers, C batch normalization layers, and D long-term residual connections, wherein the convolutional layers are used for extracting image features, the dropout layers for avoiding network overfitting, and the batch normalization layers for standardizing the input of the corresponding convolution kernels. The long-term residual connections are used for retaining structural information of the input image. Convolutional layers are arranged on both sides of each dropout layer, and each dropout layer is connected to the adjacent convolutional layers on both sides; one convolutional layer is arranged between each dropout layer and the corresponding batch normalization layer; the dropout layers, convolutional layers, and batch normalization layers are connected in sequence; one end of each long-term residual connection is connected between a corresponding convolutional layer and batch normalization layer, and the other end is connected between another corresponding convolutional layer and batch normalization layer.
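The abstract names a multi-resolution optimization strategy but does not detail it in this excerpt. The sketch below shows one common reading of such a strategy, coarse-to-fine training on downsampled CBCT/CT pairs, purely as an illustration; the helper names (rdnn, make_loader), the scale schedule, and the L1 loss are hypothetical assumptions, not the patent's specified method.

import torch
import torch.nn.functional as F

def train_multires(rdnn, make_loader, scales=(0.25, 0.5, 1.0),
                   epochs_per_scale=10, lr=1e-4, device="cpu"):
    # Coarse-to-fine optimization: the same network is trained on progressively
    # finer resolutions of paired CBCT / planning-CT slices (illustrative only).
    rdnn = rdnn.to(device)
    opt = torch.optim.Adam(rdnn.parameters(), lr=lr)
    for scale in scales:                       # coarse -> fine
        for _ in range(epochs_per_scale):
            for cbct, ct in make_loader():     # hypothetical paired-data loader
                cbct = F.interpolate(cbct, scale_factor=scale, mode="bilinear",
                                     align_corners=False).to(device)
                ct = F.interpolate(ct, scale_factor=scale, mode="bilinear",
                                   align_corners=False).to(device)
                loss = F.l1_loss(rdnn(cbct), ct)   # HU error between sCT and CT
                opt.zero_grad()
                loss.backward()
                opt.step()
    return rdnn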

Description

Technical Field

[0001] The invention relates to the technical field of medical image processing and image-guided treatment, and in particular to a deep learning neural network model system for multi-modal image synthesis.

Background

[0002] Adaptive radiotherapy (ART) based on CBCT images can increase the radiation dose delivered to the tumor while protecting the organs at risk near the tumor. However, CBCT images in the prior art have low HU accuracy, poor soft-tissue resolution, and severe artifacts. Therefore, to realize adaptive radiotherapy, synthetic CT (sCT) images with high HU accuracy and structural fidelity must first be generated from the CBCT images.

[0003] U-Net and other deep learning networks have been widely used for sCT image generation. Studies showed that the HU accuracy of the generated sCT images was significantly improved, and dose calculations could be performed based...

Claims


Application Information

IPC(8): G06T3/40, G06T11/00, G06N3/04
CPC: G06T3/4046, G06T11/008, G06N3/045
Inventor 武王将杨瑞杰庄洪卿王皓
Owner PEKING UNIV THIRD HOSPITAL