Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)

A multi-focus image fusion and depth information technology, applied in image enhancement, image analysis, image data processing, etc., which addresses the problems of fused-image distortion, the pseudo-Gibbs phenomenon, and the lack of translation invariance

Inactive Publication Date: 2016-05-04
CHINA UNIV OF MINING & TECH

AI Technical Summary

Problems solved by technology

The wavelet transform is the most commonly used transform-domain fusion method. It has good time-frequency localization and preserves image detail well, but it is not translation invariant and therefore produces the pseudo-Gibbs phenomenon.
In 2002, the Contourlet transform was proposed to overcome the limited number of directional subbands of the wavelet transform, but it likewise lacks translation invariance, which causes distortion and the pseudo-Gibbs phenomenon in the fused image.

Method used



Examples


Embodiment Construction

[0040] The multi-focus image fusion method of the present invention, based on the non-subsampled Contourlet transform and a depth-information-incentive PCNN, is shown in Figure 1: the same non-subsampled Contourlet transform is applied to the original input multi-focus images to obtain a low-frequency sub-band image and a series of multi-resolution, multi-directional high-frequency sub-band images. Different fusion strategies are then applied to the low-frequency and high-frequency sub-bands to obtain the fusion coefficients, and finally the inverse non-subsampled Contourlet transform is applied to the fused coefficients to obtain the final fusion result. The specific steps are as follows:
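The following Python sketch illustrates only the overall flow of this pipeline (decompose both inputs identically, fuse low- and high-frequency sub-bands with different rules, then invert the transform). It is not the patent's implementation: since no standard NSCT library is assumed, a simple à trous (undecimated, shift-invariant) Laplacian-style decomposition stands in for the non-subsampled Contourlet transform, and the per-band rules shown (averaging for the low band, maximum absolute value for the high bands) are placeholders for the edge-information-energy and SML-excited PCNN rules described below. The names `decompose`, `reconstruct`, `fuse_multifocus`, and `levels` are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, levels=3):
    """Shift-invariant multi-scale decomposition (a-trous Laplacian style).

    Stand-in for the non-subsampled Contourlet transform: it yields one
    low-frequency sub-band and one detail sub-band per scale, all at full
    image size, but without NSCT's directional sub-bands.
    """
    highs, low = [], img.astype(np.float64)
    for k in range(levels):
        blurred = gaussian_filter(low, sigma=2.0 ** k)
        highs.append(low - blurred)   # detail retained at this scale
        low = blurred                 # coarser approximation
    return low, highs

def reconstruct(low, highs):
    """Inverse of decompose(): sum the approximation and all detail bands."""
    return low + sum(highs)

def fuse_multifocus(img_a, img_b, levels=3):
    """Pipeline of the embodiment: identical decomposition of both inputs,
    different fusion rules for low- and high-frequency sub-bands, then the
    inverse transform of the fused coefficients."""
    low_a, highs_a = decompose(img_a, levels)
    low_b, highs_b = decompose(img_b, levels)

    # Placeholder rules; the patent uses edge-information energy for the
    # low band and an SML-excited PCNN for the high bands.
    low_f = 0.5 * (low_a + low_b)
    highs_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
               for ha, hb in zip(highs_a, highs_b)]

    return reconstruct(low_f, highs_f)
```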

[0041] (1) After registration preprocessing of the multi-focus images of the same scene, a multi-scale, multi-directional non-subsampled Contourlet transform is performed on the multi-focus images I_A and I_B, decomposing each of the two images into a low-...


PUM

No PUM

Abstract

The invention discloses a multi-focus image fusion method based on the non-subsampled Contourlet transform and a depth information incentive PCNN (Pulse Coupled Neural Network). The method comprises the following steps: generating a low-frequency sub-band image and a series of high-frequency sub-band images by applying a multi-scale, multi-directional non-subsampled Contourlet transform to the input multi-focus source images; obtaining the low-frequency sub-band coefficients by applying an edge-information-energy rule based on the sub-band coefficients to the low-frequency sub-bands, and determining each band-pass sub-band coefficient by applying a modified PCNN model to the high-frequency sub-bands; and finally obtaining the fused image through the inverse non-subsampled Contourlet transform. The modification to the PCNN lies mainly in its input: whereas most PCNN-based algorithms use pixel gray values as the model input, here the model is driven by the SML (Sum-Modified-Laplacian), which describes image direction and texture information well, combined with a factor derived from the image depth information. The method is well suited to the field of image fusion, and experimental results show that it produces fusion results that better conform to human visual perception in terms of both objective evaluation indices and subjective visual effect.
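A minimal, non-authoritative sketch of the core high-frequency rule is given below: it computes the Sum-Modified-Laplacian (SML) of each high-frequency sub-band, uses it to excite a standard simplified PCNN, and selects at each position the coefficient whose neuron fires more often. The PCNN parameters (`beta`, `alpha_l`, `alpha_t`, `v_l`, `v_t`, number of iterations) are typical literature values, not the patent's; the optional `depth_weight` array is a hypothetical stand-in for the factor combining the SML with image depth information, since the patent's exact combination is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def sml(coeff, step=1, window=3):
    """Sum-Modified-Laplacian: a clarity/focus measure of a sub-band."""
    ml = (np.abs(2 * coeff - np.roll(coeff, step, 0) - np.roll(coeff, -step, 0))
          + np.abs(2 * coeff - np.roll(coeff, step, 1) - np.roll(coeff, -step, 1)))
    # Sum the modified Laplacian over a local window.
    return uniform_filter(ml, size=window) * window * window

def pcnn_firing_counts(stimulus, iterations=200, beta=0.2,
                       alpha_l=1.0, alpha_t=0.2, v_l=1.0, v_t=20.0):
    """Simplified PCNN: count how often each neuron fires for a stimulus in [0, 1]."""
    w = np.array([[0.707, 1.0, 0.707],
                  [1.0,   0.0, 1.0],
                  [0.707, 1.0, 0.707]])            # linking weights to 8 neighbours
    f = stimulus                                    # feeding input = external stimulus
    l = np.zeros_like(f)                            # linking input
    y = np.zeros_like(f)                            # firing output
    theta = np.ones_like(f)                         # dynamic threshold
    counts = np.zeros_like(f)
    for _ in range(iterations):
        l = np.exp(-alpha_l) * l + v_l * convolve(y, w, mode="constant")
        u = f * (1.0 + beta * l)                    # internal activity
        y = (u > theta).astype(f.dtype)             # fire when activity exceeds threshold
        theta = np.exp(-alpha_t) * theta + v_t * y  # fired neurons raise their threshold
        counts += y
    return counts

def fuse_high(coeff_a, coeff_b, depth_weight=None):
    """Per-pixel choice of the high-frequency coefficient whose SML-excited
    PCNN neuron fires more often; depth_weight is a hypothetical factor."""
    stim_a, stim_b = sml(coeff_a), sml(coeff_b)
    if depth_weight is not None:                    # combine with depth information
        stim_a = stim_a * depth_weight
        stim_b = stim_b * (1.0 - depth_weight)
    scale = max(stim_a.max(), stim_b.max()) + 1e-12 # joint normalisation to [0, 1]
    counts_a = pcnn_firing_counts(stim_a / scale)
    counts_b = pcnn_firing_counts(stim_b / scale)
    return np.where(counts_a >= counts_b, coeff_a, coeff_b)
```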

Description

Technical Field
[0001] The invention belongs to the field of image fusion in image processing, and in particular relates to a multi-focus image fusion method based on the non-subsampled Contourlet transform (NSCT) and a depth information incentive PCNN.
Background Technique
[0002] Multi-focus image fusion refers to fusing multiple source images of the same scene, each focused on a different local object, so as to effectively extract the clear part of each source image and finally obtain a single high-quality image containing more comprehensive information about the scene.
[0003] Image fusion methods operate at three different levels: pixel level, feature level, and decision level. The multi-focus image fusion studied by the present invention belongs to pixel-level image fusion, which mainly includes spatial-domain and transform-domain methods; most current research is based on transform-domain methods. ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50; G06N3/04
CPC: G06T5/50; G06T2207/20221; G06T2207/20084; G06N3/044
Inventors: 丁世飞, 朱强波 (Ding Shifei, Zhu Qiangbo)
Owner: CHINA UNIV OF MINING & TECH