
Image fusion method based on shift-invariant shearlets and stack autoencoder

An image fusion technology based on shift-invariant shearlets and a stacked autoencoder, applied in the field of image fusion, with the effects of eliminating the pseudo-Gibbs phenomenon, achieving strong local contrast, and preserving edge and texture information.

Inactive Publication Date: 2017-06-27
JIANGNAN UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

In recent years, deep learning has achieved great success in computer vision. Deep learning methods excel at discovering hierarchical features in complex data sets, but because practical image fusion applications lack sufficient labeled data, supervised methods such as the convolutional neural network (CNN) had not yet been applied to image fusion. The stacked autoencoder (SAE), a deep neural network trained by unsupervised learning, meets the requirements of the image fusion scenario.
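One building block of such a stacked autoencoder can be sketched in a few lines of NumPy. The sketch below is purely illustrative (toy data, a single sigmoid encoder with a linear decoder, trained by plain gradient descent on reconstruction error); the layer sizes and learning rate are assumptions for this example, not the architecture claimed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 200 unlabeled 16-pixel "image blocks" -- no labels needed (unsupervised)
X = rng.random((200, 16))
n_in, n_hid = 16, 8
W1 = rng.normal(0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)  # encoder
W2 = rng.normal(0, 0.1, (n_in, n_hid)); b2 = np.zeros(n_in)   # decoder

lr = 0.5
for _ in range(500):
    H = sigmoid(X @ W1.T + b1)        # encode: hidden features
    Xr = H @ W2.T + b2                # decode: reconstruction
    err = Xr - X                      # reconstruction error
    # gradient descent on mean squared reconstruction error
    gW2 = err.T @ H / len(X); gb2 = err.mean(0)
    dH = (err @ W2) * H * (1 - H)     # backprop through the sigmoid
    gW1 = dH.T @ X / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# after training, the reconstruction is noticeably better than at init
mse = np.mean((sigmoid(X @ W1.T + b1) @ W2.T + b2 - X) ** 2)
```

Stacking such layers (training each on the codes of the previous one) yields the SAE feature extractor the method relies on.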

Method used


Image

  • Image fusion method based on shift-invariant shearlets and stack autoencoder

Examples


Embodiment Construction

[0030] The embodiments of the present invention are described in detail below in conjunction with the accompanying drawings. This embodiment is carried out on the premise of the technical solution of the present invention. As shown in Figure 1, the detailed implementation and specific operation steps are as follows:

[0031] Step 1: use the SIST transform to decompose the two multi-focus images to be fused, obtaining the low-frequency subbands and high-frequency subbands of each image. Here, "maxflat" is selected for the Laplacian pyramid (LP) scale decomposition, "pmaxflat" is selected for the directional filter bank, and the directional decomposition parameters are set to [3,4,5];
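The SIST transform itself requires a dedicated shearlet toolbox ("maxflat" LP, "pmaxflat" directional filters). As a stand-in that only illustrates the kind of low/high split the decomposition produces, the sketch below uses a Gaussian low-pass with a perfect-reconstruction residual; the function name and `sigma` are assumptions for this sketch, not part of the patent's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_highpass_split(img, sigma=2.0):
    """Illustrative stand-in for the SIST low/high split: a Gaussian
    low-pass gives a base-contour (low-frequency) band, the residual
    carries the edge/texture (high-frequency) detail."""
    low = gaussian_filter(img.astype(np.float64), sigma)
    high = img - low
    return low, high

img = np.random.rand(64, 64)
low, high = lowpass_highpass_split(img)
# by construction, low + high reconstructs the input exactly
```

A real SIST decomposition additionally splits the high-frequency band into several directional subbands per scale, which this two-band sketch omits.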

[0032] Step 2, low-frequency sub-band coefficient fusion and high-frequency sub-band coefficient fusion:

[0033] 1) For the low-frequency subband coefficients, the average fusion strategy is used:

[0034] C_F^L(i,j) = (C_A^L(i,j) + C_B^L(i,j)) / 2

[0035] where C_A^L and C_B^L respectively represent the corresponding low-frequency coefficients of...
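The averaging rule of step 2.1 is straightforward to sketch; the function name below is illustrative only.

```python
import numpy as np

def fuse_lowpass_average(low_a, low_b):
    """Average fusion of low-frequency subbands (step 2.1): each fused
    coefficient is the mean of the two source coefficients, preserving
    the shared base contour of the scene."""
    return (low_a + low_b) / 2.0

a = np.array([[2.0, 4.0], [6.0, 8.0]])
b = np.array([[4.0, 4.0], [2.0, 0.0]])
fused = fuse_lowpass_average(a, b)
# fused == [[3., 4.], [4., 4.]]
```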



Abstract

The invention discloses an image fusion method based on shift-invariant shearlets and a stacked autoencoder. The method proceeds as follows: the shift-invariant shearlet transform decomposes each image to be fused into a low-frequency subband coefficient, which reflects the basic contour of the image and is fused by weighted averaging, and a high-frequency subband coefficient, which reflects the edge and texture information of the image. For the high-frequency subbands, the invention provides a fusion method based on stacked-autoencoder features: a sliding-window scheme divides each high-frequency subband into blocks; a stacked autoencoder network is trained with these blocks as input; the trained network encodes the blocks to obtain features; the features are enhanced with spatial frequency to obtain an activity measure; corresponding blocks of the high-frequency subband coefficients are fused by taking, at each position, the block with the larger activity measure; after all blocks are fused, the high-frequency subband is recovered by the inverse sliding-window transform; and the fused image is obtained by the inverse shift-invariant shearlet transform. Compared with traditional fusion methods, the proposed method better preserves the edge and texture information of the source images.
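The choose-max rule for high-frequency blocks can be illustrated as follows. Note that in the actual method the activity measure comes from SAE features enhanced by spatial frequency; for brevity this sketch applies the spatial-frequency measure directly to raw blocks, and all function names are assumptions.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), where RF/CF are the
    RMS first differences along rows/columns; here it serves as the
    activity measure ranking competing high-frequency blocks."""
    rf2 = np.mean(np.diff(block, axis=1) ** 2)  # row frequency^2
    cf2 = np.mean(np.diff(block, axis=0) ** 2)  # column frequency^2
    return np.sqrt(rf2 + cf2)

def fuse_blocks_max_activity(blocks_a, blocks_b):
    """Choose-max rule: at each block position, keep the source block
    whose activity measure is larger."""
    return [a if spatial_frequency(a) >= spatial_frequency(b) else b
            for a, b in zip(blocks_a, blocks_b)]

flat = np.ones((4, 4))                       # no detail -> SF == 0
edge = np.zeros((4, 4)); edge[:, 2:] = 1.0   # vertical edge -> SF > 0
fused = fuse_blocks_max_activity([flat], [edge])
# the edge block wins, since its activity measure is larger
```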

Description

technical field

[0001] The invention relates to an image fusion method based on shift-invariant shearlets and stacked autoencoding, a fusion method in the technical field of image processing that is widely used in military applications and clinical medical diagnosis.

Background technique

[0002] Because a single image contains limited information, it often cannot meet the needs of practical applications. Image fusion is a technology that synthesizes images of the same scene, collected by multiple sensors, into one image through fusion-algorithm processing. The fused image effectively combines the advantages of the multiple images to be fused and is better suited to human visual perception. Image fusion was proposed in the 1970s. In recent years, owing to the rapid development of multi-sensor technology, image fusion has been widely applied in military reconnaissance, medical diagnosis and remote sensing.

[0003] Image fusion methods can be roughly ...

Claims


Application Information

IPC(8): G06T5/50; G06T9/00
CPC: G06T5/50; G06T9/00; G06T2207/10048; G06T2207/10052; G06T2207/10081; G06T2207/10088; G06T2207/20048; G06T2207/20221
Inventor: 罗晓清, 张战成, 王鹏飞, 檀华廷, 王骏, 董静
Owner JIANGNAN UNIV