
Multi-modal medical image fusion method

A multi-modal medical image fusion method, applied in the field of image fusion, that enhances the network's self-learning ability and avoids information loss

Active Publication Date: 2021-08-20
SHANDONG FIRST MEDICAL UNIV & SHANDONG ACADEMY OF MEDICAL SCI

AI Technical Summary

Problems solved by technology

[0004] To address the problems in the prior art, the present invention provides a general feed-forward neural network fusion technique based on feature-point priors, which resolves the technical difficulty that useful fine-grained information in the images to be fused is incompletely extracted and easily lost.



Embodiment Construction

[0032] The present invention is further described below with reference to the drawings and the embodiments of the specification.

[0033] As shown in Figures 1-3, the present invention is a new framework (D-ERBFNN) for medical image fusion based on the discrete stationary wavelet transform and an enhanced radial basis function neural network. First, in view of its translation invariance and computational cost, the discrete stationary wavelet transform is used as the multi-scale transform operator. After a two-stage wavelet decomposition, 14 subbands are obtained, which respectively represent different information features of the two source images to be fused. Because no up- or down-sampling is involved, information loss is avoided as far as possible. Then, for each corresponding pair of subbands, fully considering the pixel characteristics and the contextual features between pixels, feature information accurate to the point level is substituted i...
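The decomposition step above can be sketched in a few lines of numpy. This is a minimal illustrative implementation, not the patent's exact filter bank: it assumes a Haar wavelet with periodic boundaries (the summary does not specify the wavelet), and uses the à trous trick of dilating the filter at the second level so that all 7 subbands per image stay the same size as the input, with no up- or down-sampling — 14 subbands in total for two source images.

```python
import numpy as np

def haar_swt_level(x, step):
    """One level of an undecimated (stationary) Haar transform.
    `step` is the filter dilation; returns (LL, LH, HL, HH),
    each the same shape as x -- no downsampling."""
    def lo(a, axis):  # averaging (low-pass) filter, periodic boundary
        return 0.5 * (a + np.roll(a, -step, axis=axis))
    def hi(a, axis):  # differencing (high-pass) filter
        return 0.5 * (a - np.roll(a, -step, axis=axis))
    L, H = lo(x, 0), hi(x, 0)                       # filter along rows
    return lo(L, 1), hi(L, 1), lo(H, 1), hi(H, 1)   # then along columns

def swt2_two_level(img):
    """Two-stage stationary decomposition -> 7 equally sized subbands."""
    LL1, LH1, HL1, HH1 = haar_swt_level(img, step=1)
    # Level 2: re-decompose the approximation with a dilated (a trous) filter.
    LL2, LH2, HL2, HH2 = haar_swt_level(LL1, step=2)
    return [LL2, LH2, HL2, HH2, LH1, HL1, HH1]

# Sanity check: with these filters the 7 subbands sum back to the image,
# i.e. the decomposition loses no information.
img = np.random.rand(8, 8)
bands = swt2_two_level(img)
assert np.allclose(sum(bands), img)
```

With this particular choice of low/high-pass pair, inverse reconstruction is simply the sum of the subbands, which makes the "no information loss" property of the stationary transform easy to verify.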



Abstract

The invention discloses a multi-modal medical image fusion method comprising the following steps: 1) source image decomposition: performing a two-stage discrete stationary wavelet transform on at least two source images; 2) source image fusion: fusing the at least two source images through seven enhanced radial basis function neural networks; and 3) inverse wavelet transform: converting the seven fused subbands into a fused image via the inverse wavelet transform. The method has the advantages that the translation-invariant multi-scale transform operator provides sufficient directional information, and the algorithm is relatively simple and easy to operate. To enhance the self-learning ability of the neural network for medical image fusion, the pixel value, regional energy, pixel gradient, and regional average gradient are adopted to form the input layer of the neural network, so that the information of target feature points is accurately extracted while information loss is avoided.

Description

technical field [0001] The invention relates to the technical field of image fusion, and in particular to a multi-modal medical image fusion method based on the discrete stationary wavelet transform and an enhanced radial basis function neural network. Background technique [0002] Image fusion technology synthesizes different source images into a new image through a certain technique or method. Rather than simply superimposing all image data, the process performs targeted analysis and processing of the target images through one or more algorithms. For multi-modal medical image fusion, two (or more) medical images from different imaging devices are combined, and an algorithm is used to analyze the advantages or complementarity of each image so that the effective information can be further extracted and fused. With the development of medical imaging technology, computed tomography (Computed tomography, CT), magnetic resonance imaging (Magnetic resonance imaging, MRI), positron emission tomogr...


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06T5/50; G06N3/04; G06N3/08
CPC: G06T5/50; G06N3/08; G06T2207/20221; G06N3/045
Inventor: 于长斌, 晁震
Owner SHANDONG FIRST MEDICAL UNIV & SHANDONG ACADEMY OF MEDICAL SCI