Deep-learning-based multi-mode medical image non-rigid registration method and system

A medical image non-rigid registration technology, applied in the field of non-rigid multi-modal medical image registration, which addresses the problem of the low registration accuracy of non-rigid multi-modal medical images

Active Publication Date: 2018-08-17
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0006] In view of the above defects or improvement needs of the prior art, the present invention provides a method and system for non-rigid registration of multi-modal medical images based on deep learning, aiming to solve the problem of the low registration accuracy of non-rigid multi-modal medical images.

Method used



Examples


Embodiment 1

[0090] Step 1: Train the PCANet network. Input N medical images. For each pixel of each image, take a k1 × k2 block without interval (i.e., densely, so that neighbouring blocks overlap); vectorize the obtained blocks and remove their mean. Combining all the resulting vectors yields a matrix. Compute the eigenvectors of this matrix, sort the eigenvalues from large to small, and take the eigenvectors corresponding to the first L1 eigenvalues. Reshaping these L1 eigenvectors into matrices gives the L1 convolution templates of the first layer. Convolving the convolution templates with the input images yields N·L1 images. These N·L1 images are input into the second PCANet layer; following the same processing as the first layer, the L2 convolution templates of the second layer are obtained, and N·L1·L2 images result.
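To make the patch-based filter learning of Step 1 concrete, the following is a minimal sketch of learning the first-layer PCANet convolution templates from a set of images. The function name and the default values k1 = k2 = 7, L1 = 8 are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def learn_pcanet_filters(images, k1=7, k2=7, L1=8):
    """Learn first-layer PCANet convolution templates (illustrative sketch).

    images: list of 2-D numpy arrays (the N training images).
    Returns L1 filters, each of shape (k1, k2).
    """
    patches = []
    for img in images:
        # Take a k1 x k2 block at every valid pixel (dense, overlapping blocks).
        for i in range(img.shape[0] - k1 + 1):
            for j in range(img.shape[1] - k2 + 1):
                block = img[i:i + k1, j:j + k2].reshape(-1)
                patches.append(block - block.mean())   # vectorize and remove the mean
    X = np.stack(patches, axis=1)                      # (k1*k2, num_patches) matrix

    # Eigen-decomposition; sort the eigenvalues from large to small.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(eigvals)[::-1][:L1]

    # Reshape the leading L1 eigenvectors into the first-layer convolution templates.
    return [eigvecs[:, idx].reshape(k1, k2) for idx in order]
```

Convolving each learned template with the input images (e.g. with scipy.ndimage.convolve) then yields the N·L1 first-layer response images that feed the second layer, where the same procedure produces the L2 second-layer templates.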

[0091] Step 2: Obtain a PCANet-based structural representation (PSR for short) according to the PCANet ...
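The paragraph above is truncated, so the patent's exact PSR definition is not reproduced here. Purely as an illustration, the sketch below passes an image through the two trained filter banks and combines the second-layer responses into a single structural map by a weighted sum; the binarize-and-weight combination is an assumption borrowed from the standard PCANet output stage, not the patent's own formula.

```python
import numpy as np
from scipy.ndimage import convolve

def structural_representation(img, filters1, filters2):
    """Hypothetical PSR map: two PCANet convolution stages plus a weighted combination."""
    layer1 = [convolve(img, f1, mode='nearest') for f1 in filters1]         # L1 maps
    layer2 = [[convolve(r, f2, mode='nearest') for f2 in filters2]          # L1*L2 maps
              for r in layer1]

    # Assumed combination: binarize each second-layer response and sum with
    # weights 2**l, as in standard PCANet output coding (may differ from the patent).
    psr = np.zeros_like(img, dtype=float)
    for maps in layer2:
        for l, resp in enumerate(maps):
            psr += (2.0 ** l) * (resp > 0)
    return psr / max(psr.max(), 1e-12)   # normalize to [0, 1] for mono-modal use
```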



Abstract

The invention discloses a deep-learning-based multi-modal medical image non-rigid registration method and system. The registration method comprises: training a PCANet on a large corpus of medical data; inputting a floating image and a reference image into the trained PCANet to obtain structural representation maps of the two images; and obtaining a registered image according to these structural representation maps. By constructing the structural representation map of an image on the basis of the PCANet deep learning network, the invention transforms the registration of non-rigid multi-modal medical images into a mono-modal medical image registration problem, so that the accuracy and robustness of non-rigid multi-modal medical image registration are substantially improved.
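As a hedged sketch of how the abstract's pipeline could be wired together, the snippet below computes the structural representation maps of the reference and floating images and then runs an ordinary mono-modal non-rigid (B-spline) registration on those maps. SimpleITK, the mean-squares metric, and the pcanet.structural_representation interface are illustrative assumptions, not components specified by the patent.

```python
import SimpleITK as sitk

def register_multimodal(reference, floating, pcanet):
    """Multi-modal registration reduced to a mono-modal problem via PSR maps (sketch).

    reference, floating: numpy arrays from different imaging modalities.
    pcanet: a trained model exposing .structural_representation(img) -> numpy array
            (hypothetical interface standing in for the trained PCANet).
    """
    # 1. Structural representation maps of both images (same "modality" by construction).
    psr_ref = sitk.GetImageFromArray(pcanet.structural_representation(reference))
    psr_flo = sitk.GetImageFromArray(pcanet.structural_representation(floating))

    # 2. Mono-modal non-rigid registration on the PSR maps (B-spline free-form deformation).
    mesh_size = [8] * psr_ref.GetDimension()
    bspline = sitk.BSplineTransformInitializer(psr_ref, mesh_size)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()                 # SSD is reasonable once both inputs are PSR maps
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(bspline, inPlace=True)
    transform = reg.Execute(psr_ref, psr_flo)

    # 3. Warp the original floating image with the recovered deformation.
    moving = sitk.GetImageFromArray(floating)
    reference_img = sitk.GetImageFromArray(reference)
    return sitk.Resample(moving, reference_img, transform, sitk.sitkLinear, 0.0)
```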

Description

Technical field

[0001] The invention belongs to the field of image registration in image processing and analysis, and more specifically relates to a non-rigid multi-modal medical image registration method and system.

Background technique

[0002] Non-rigid multi-modal medical image registration is important for medical image analysis and clinical research. Because the various imaging technologies rely on different physical principles, each modality has its own advantages in reflecting information about the human body. Computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound imaging can reveal the anatomical information of organs. As a functional imaging modality, positron emission tomography (PET) can reveal metabolic information but cannot clearly provide the anatomical information of organs. Multi-modal image fusion technology can combine the information of images from different modalities, so as to obtain a more accurate diagnosis and better treatment.

[0003] The purpose of image registrati...

Claims


Application Information

IPC(8): G06T7/33; G06N3/04
CPC: G06T7/33; G06T2207/10081; G06T2207/10088; G06N3/045
Inventor: 张旭明, 朱星星
Owner: HUAZHONG UNIV OF SCI & TECH