
Image fusion method based on guided filtering and online dictionary learning

A technology combining guided filtering with dictionary learning, applied in the field of image fusion. It addresses the poor robustness, flexibility, and adaptability and the low efficiency of existing methods, and achieves strong robustness and flexibility, efficient multi-image fusion, and good edge-preservation characteristics.

Publication status: Inactive · Publication date: 2018-12-21
Owner: NAT UNIV OF DEFENSE TECH
Cites: 2 · Cited by: 11

AI Technical Summary

Problems solved by technology

[0006] 1. An image comprises low-frequency and high-frequency components. The low-frequency component carries a large amount of background information but is usually not the main region of interest, while the high-frequency component reflects the texture details of the target region. Sparsely representing the entire image directly therefore incurs high processing complexity and a large amount of unnecessary data processing, which lengthens fusion time; in large-scale dynamic data processing such as video fusion, this severely limits fusion efficiency.

[0007] 2. The over-complete dictionary is the key to how well a sparse representation can describe the image signal, and it directly determines the performance of the image fusion method. Training-based dictionaries such as the classic K-SVD yield better fusion results than fixed-structure dictionaries such as DCT and wavelet dictionaries, but the classic K-SVD dictionary has poor robustness and flexibility: whenever a new signal is introduced, the dictionary must be rebuilt, which makes it unsuitable for large-scale dynamic data processing such as video fusion.

Furthermore, the wavelet decomposition used in the prior scheme readily causes the Gibbs effect and does not protect edge details sufficiently; its traditional sliding-window strategy is inefficient and time-consuming; and its strategy of learning the dictionary from the low-frequency sub-band matrix has poor flexibility and adaptability, making it likewise unsuitable for large-scale dynamic data processing such as video fusion.
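
To make the contrast concrete, here is a minimal sketch (not the patent's method) of incremental dictionary updating with scikit-learn's MiniBatchDictionaryLearning, whose partial_fit absorbs new signals without rebuilding the dictionary the way batch K-SVD retraining would; the patch generator video_patch_stream is a hypothetical stand-in for a real frame source:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Online learning: the dictionary is refined in place as new signals arrive,
# instead of being rebuilt from scratch as batch K-SVD would require.
dico = MiniBatchDictionaryLearning(n_components=128, batch_size=256)

def video_patch_stream():
    # Hypothetical generator yielding (n_patches, 64) arrays of flattened
    # 8x8 patches taken from successive video frames.
    for _ in range(10):
        yield np.random.rand(256, 64)

for frame_patches in video_patch_stream():
    dico.partial_fit(frame_patches)   # incremental dictionary update
D = dico.components_                  # current dictionary atoms, shape (128, 64)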




Embodiment Construction

[0054] The present invention will be further described below in conjunction with the accompanying drawings and specific preferred embodiments, but the protection scope of the present invention is not limited thereby.

[0055] As shown in Figures 1 and 2, the steps of the image fusion method based on guided filtering and online dictionary learning in this embodiment include:

[0056] S1. Obtain all source images and decompose each of them based on the guided filtering method to obtain its low-frequency and high-frequency components;
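
As an illustration of step S1 (a sketch under standard assumptions, not the patent's exact parameterization), the following decomposes an 8-bit image with a self-guided filter in the formulation of He et al.; the radius r and regularization eps are assumed tunable parameters:

import cv2
import numpy as np

def guided_filter(I, p, r=8, eps=0.04):
    # Edge-preserving smoothing of p using guide image I (He et al.).
    k = (2 * r + 1, 2 * r + 1)
    mean_I  = cv2.boxFilter(I, -1, k)
    mean_p  = cv2.boxFilter(p, -1, k)
    corr_I  = cv2.boxFilter(I * I, -1, k)
    corr_Ip = cv2.boxFilter(I * p, -1, k)
    var_I   = corr_I - mean_I * mean_I
    cov_Ip  = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # local linear coefficients
    b = mean_p - a * mean_I
    return cv2.boxFilter(a, -1, k) * I + cv2.boxFilter(b, -1, k)

def decompose(src):
    # S1: split one source image into low- and high-frequency components.
    img = src.astype(np.float64) / 255.0   # assumes 8-bit grayscale input
    low = guided_filter(img, img)          # self-guided smoothing -> base layer
    return low, img - low                  # residual -> detail layer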

[0057] S2. Fuse the low-frequency components of the decomposed source images in an integrated manner to obtain the fused low-frequency component, and fuse the high-frequency components of the decomposed source images using a fusion method based on sparse representation to obtain the fused high-frequency component; when using the sparse-representation-based fusion method, the dictionary is obtained by the online robust dictionary learning method;

S3. Combine the fused low-frequency component and the fused high-frequency component to obtain the final fused image.
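
As a sketch of the high-frequency branch of S2: scikit-learn's mini-batch dictionary learner stands in for the patent's online robust dictionary learning, and a max-L1 activity rule, a common choice in sparse-representation fusion and an assumption here, selects among the sources' sparse codes; PATCH, n_atoms, and n_nonzero are illustrative parameters:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

PATCH = (8, 8)   # assumed patch size

def fuse_high_freq(highs, n_atoms=128, n_nonzero=5):
    # Gather every overlapping patch of each high-frequency component.
    patch_sets = [extract_patches_2d(h, PATCH).reshape(-1, PATCH[0] * PATCH[1])
                  for h in highs]
    # Learn one shared dictionary online over patches from all sources.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256)
    D = dico.fit(np.vstack(patch_sets)).components_
    # Sparse-code each source's patches with OMP over the shared dictionary.
    codes = [sparse_encode(P, D, algorithm='omp', n_nonzero_coefs=n_nonzero)
             for P in patch_sets]
    # Max-L1 rule: per patch position, keep the code with the largest L1 norm.
    activity = np.stack([np.abs(c).sum(axis=1) for c in codes])  # (n_src, n_patches)
    winner = activity.argmax(axis=0)
    fused_codes = np.stack(codes)[winner, np.arange(winner.size)]
    fused_patches = (fused_codes @ D).reshape(-1, *PATCH)
    # Overlapping patches are averaged back into a full-size image.
    return reconstruct_from_patches_2d(fused_patches, highs[0].shape)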



Abstract

The invention discloses an image fusion method based on guided filtering and online dictionary learning. The method comprises the following steps: S1. Obtain all source images and decompose each of them based on guided filtering to obtain its low-frequency and high-frequency components. S2. Fuse the low-frequency components of the decomposed source images in an integrated manner to obtain the fused low-frequency component, and fuse the high-frequency components of the decomposed source images using a fusion method based on sparse representation to obtain the fused high-frequency component; when the sparse-representation-based fusion method is used, the dictionary is obtained by the online robust dictionary learning method. S3. Combine the fused low-frequency component and the fused high-frequency component to obtain the final fused image. The invention has the advantages of a simple implementation, good real-time performance and multi-image fusion results, high fusion efficiency, and ease of realization.
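
Wiring the three steps together (a sketch reusing decompose() and fuse_high_freq() from the snippets above; simple averaging of the low-frequency components is an assumption standing in for the abstract's "integrated manner"):

import numpy as np

def fuse_images(sources):
    lows, highs = zip(*(decompose(s) for s in sources))  # S1: per-source decomposition
    low_fused = np.mean(lows, axis=0)                    # S2: low-frequency branch (assumed averaging)
    high_fused = fuse_high_freq(list(highs))             # S2: high-frequency branch
    return np.clip(low_fused + high_fused, 0.0, 1.0)     # S3: recombine into the fused image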

Description

Technical field

[0001] The invention relates to the technical field of digital image processing, and in particular to an image fusion method based on guided filtering and online dictionary learning.

Background technique

[0002] Since a single sensor image cannot provide enough information, adding information from different sensors can enhance visibility for the human eye. Multi-source image fusion refers to taking multiple registered images of the same scene or target collected by multiple sensors, or multiple images acquired by the same sensor in different working modes, and, after certain processing, extracting the features of each measured image and synthesizing them into a single image. This makes full use of the redundant and complementary information contained in the images to be fused, yielding more reliable and more accurate information for subsequent observation or further processing. Complementary imaging sensors include infrared and low-light sensors, a...


Application Information

IPC(8): G06T5/50
CPC: G06T5/50; G06T2207/10016; G06T2207/20104
Inventors: 彭元喜, 李俊, 杨文婧, 杨绍武, 黄达, 宋明辉, 刘璐, 邹通, 周士杰
Owner: NAT UNIV OF DEFENSE TECH