Multi-source image fusion method based on discriminant dictionary learning and morphological decomposition

A technology based on dictionary learning and morphological components, applied in character and pattern recognition, instruments, computer parts, etc., which can solve problems not considered by existing image fusion methods

Active Publication Date: 2018-12-11
KUNMING UNIV OF SCI & TECH


Problems solved by technology

However, this important factor has not been considered in traditional ...



Examples


Embodiment 1

[0078] Example 1: Medical Image Fusion

[0079] In the first set of experiments, fusion experiments are performed on the multimodal medical images shown in Figure 2(a) and (b), where Figure 2(a) is an MR-T1 image and Figure 2(b) is an MR-T2 image. As can be seen from Figure 2(a) and (b), owing to the difference in weighting, the MR-T1 and MR-T2 images contain a large amount of complementary information. If this information can be synthesized into a fused image, it will be very beneficial to doctors' diagnosis and treatment, as well as to subsequent image processing tasks such as image classification, segmentation, and target recognition.

[0080] Figure 3(a)-(f) show, in turn, the fusion results of NSCT, NSCT-SR, Kim's, Zhu-KSVD, ASR, and the proposed method. It can be seen that the different fusion methods differ in how well they preserve image edge and detail information. Among them, the NSCT-based fusion method can effective...

Embodiment 2

[0083] Embodiment 2: Multi-focus image fusion

[0084] In the second set of experiments, fusion experiments were performed on the multi-focus images shown in Figure 4(a) and (b). As Figure 4(a) and (b) show, when the camera lens is focused on a certain object, that object is imaged clearly, but objects far from the focal plane appear blurred. In practice, however, computer vision and image processing tasks such as target segmentation, image classification, and target recognition often require an image in which all targets are clear. This problem can usually be solved by multi-focus image fusion. The proposed method can therefore be used not only for medical image fusion but also for multi-focus image fusion.

[0085] Figure 5(a)-(f) show a visual comparison of the fusion results of NSCT, NSCT-SR, Kim's, Zhu-KSVD, ASR, and our me...

Embodiment 3

[0088] Example 3: Fusion of infrared and visible light images

[0089] In the third set of experiments, different methods were used to perform fusion experiments on the infrared and visible light images shown in Figure 6(a) and (b), where Figure 6(a) is an infrared image and Figure 6(b) is a visible light image. From these source images, it can be seen that the visible light image clearly reflects the background details of the scene but cannot clearly image thermal targets (such as pedestrians and vehicles); conversely, the infrared image clearly reflects thermal targets (such as pedestrians and vehicles) but cannot clearly image the background, which is not at a high temperature. An image with both a clear background and clear thermal targets plays an important role in target tracking, recognition, segmentation, and detection.

[0090] The fused images obtained by the different methods are shown in Figure 7(a)-(f), where Figure 7(a) and (b) are the fusion r...



Abstract

The invention provides a multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition. To separate the cartoon and texture components of the different morphological structures in a source image, the method transforms the image decomposition problem into an image classification problem and designs a cartoon-texture discriminant dictionary learning model. Considering that image decomposition depends not only on the dictionary but also on the decomposition strategy, a new image decomposition model is designed. In this model, the texture component is regarded as noise superimposed on the cartoon component of the source image, and a consistency regularization term based on non-local mean similarity is introduced to constrain the solution space of the sparse coding coefficients. Finally, the coding coefficients of the fused image are selected according to the maximum l1 norm of the coding coefficients of the corresponding components. Experimental results show that the method achieves better fusion performance in both visual effect and objective indices.
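The max-l1-norm selection rule named in the abstract can be sketched as follows. This is a minimal numpy illustration, not the patented implementation: it assumes each source image's component has already been sparse-coded over a shared dictionary into a coefficient matrix with one column per patch, and the function name and toy data are hypothetical.

```python
import numpy as np

def fuse_coefficients(coeffs_a, coeffs_b):
    """For each patch (column), keep the sparse coding vector whose
    l1 norm is larger -- the activity measure the abstract describes.
    coeffs_a, coeffs_b: (n_atoms, n_patches) coefficient matrices of
    the same morphological component from the two source images."""
    l1_a = np.abs(coeffs_a).sum(axis=0)   # per-patch l1 norms, image A
    l1_b = np.abs(coeffs_b).sum(axis=0)   # per-patch l1 norms, image B
    choose_a = l1_a >= l1_b               # boolean mask over patches
    # broadcast the per-patch choice across all atoms of each column
    return np.where(choose_a, coeffs_a, coeffs_b)

# toy example: 2 dictionary atoms, 3 image patches
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 0.2, 0.5]])
B = np.array([[0.3, 0.9, 0.1],
              [0.3, 0.0, 0.1]])
F = fuse_coefficients(A, B)   # columns taken from A, B, A respectively
```

In the full method this rule would be applied separately to the cartoon and texture coefficient matrices, after which the fused image is reconstructed by multiplying each fused coefficient matrix by its dictionary and recombining the components.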

Description

Technical Field

[0001] The invention relates to a multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition, and belongs to the technical field of image fusion data processing.

Background

[0002] Because the image information obtained by any single sensor is limited, it is difficult to achieve an accurate description of the target. To solve this problem, image fusion technology can be used to synthesize image information about the same scene from different sensors into a single description of the scene, one that cannot be obtained from any single image source. Since this technology can effectively integrate the complementary image information obtained by different sensors and provide a more accurate description of the observed object, it has been successfully applied in medical imaging, machine vision, remote sensing, security monitoring, and other fields...

Claims


Application Information

IPC(8): G06K9/62
CPC: G06F18/25
Inventor: 李华锋, 严双林, 王一棠, 余正涛, 王红斌
Owner KUNMING UNIV OF SCI & TECH