Multi-source image fusion method

A multi-source image fusion method, applied in the field of image fusion. It addresses prior-art problems such as high time consumption, large memory usage, and fusion rules that ignore the overall picture of the image, thereby improving the visual effect and reducing the "scratch" artifact.

Inactive Publication Date: 2011-02-09
CHONGQING UNIV

AI Technical Summary

Problems solved by technology

Pixel-level image fusion operates directly on the initial image data; its purpose is mainly image enhancement, image segmentation, and image classification, so as to provide better input for manual interpretation of images or for further feature-level fusion. Pixel-level fusion depends on the sensitivity of the sensor devices, and imaging distant scenes requires high-resolution sensors. Feature-level image fusion extracts feature information from each sensor image and then analyzes and processes it comprehensively; the extracted feature information is a sufficient representation or sufficient statistic of the pixel information.




Example Embodiment

[0036] The preferred embodiments of the present invention will be described in detail below.

[0037] Overall steps:

[0038] As shown in Figure 1, a multi-source image fusion method comprises the following steps:

[0039] In the image feature extraction stage:

[0040] 1) Use a wavelet kernel function to build a multi-scale support filter for the two images to be fused, A and B;
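The patent does not disclose the exact wavelet kernel or the filter construction, so the following is only a minimal sketch of step 1: a Gaussian radial kernel is used as a hypothetical stand-in for the wavelet kernel, and the filter bank is built by doubling the scale (and support size) at each level.

```python
import numpy as np

def support_filter(size: int, scale: float) -> np.ndarray:
    """Build a normalized 2-D radial smoothing filter at a given scale.
    A Gaussian kernel stands in here for the patent's (unspecified) wavelet kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * scale**2))
    return k / k.sum()  # coefficients sum to 1, so flat regions pass through unchanged

# Multi-scale filter bank: scale and support size double at each level.
filter_bank = [support_filter(4 * 2**j + 1, 2.0**j) for j in range(3)]
```

The normalization makes each filter a pure smoother, so the difference between successive smoothed versions of an image isolates detail at one scale.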

[0041] 2) Apply the multi-scale support filter to perform the support value transform (SVT) on each image to be fused, decomposing it into high- and low-frequency information;
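As a rough illustration of step 2, and under the simplifying assumption that the support value transform behaves like an à-trous-style multi-scale decomposition (one low-frequency base plus per-scale high-frequency residuals), the split into high and low frequency information might be sketched as:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_filter2d(size: int, sigma: float) -> np.ndarray:
    """Normalized 2-D Gaussian kernel (stand-in for the wavelet-kernel support filter)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def svt_decompose(image: np.ndarray, n_levels: int = 3):
    """Split an image into a low-frequency base plus per-scale high-frequency bands.
    Each level smooths the current image and keeps the residual as detail."""
    current = image.astype(float)
    highs = []
    for j in range(n_levels):
        f = gaussian_filter2d(4 * 2**j + 1, 2.0**j)
        low = convolve2d(current, f, mode='same', boundary='symm')
        highs.append(current - low)   # detail lost by smoothing at this scale
        current = low
    return current, highs
```

By construction the base plus all detail bands reconstructs the input exactly, which is the property the later fusion stage relies on.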

[0042] 3) Process the high- and low-frequency information with the anti-aliasing contourlet transform (NACT) to obtain high- and low-frequency anti-aliasing information;
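The anti-aliasing contourlet transform itself is beyond a short sketch, but the aliasing problem it targets is easy to demonstrate: decimating without a low-pass prefilter folds high frequencies into low ones, while prefiltering suppresses them. A 1-D illustration (the windowed-sinc filter here is a generic anti-aliasing design, not the patent's):

```python
import numpy as np

n = np.arange(256)
x = np.sin(2 * np.pi * 0.4 * n)  # 0.4 cycles/sample: above the post-decimation Nyquist of 0.25

# Naive decimation by 2: the 0.4 component aliases to 0.2 cycles/sample at full strength.
naive = x[::2]

# Windowed-sinc low-pass (cutoff 0.2 cycles/sample) as an anti-aliasing prefilter.
m = np.arange(33) - 16
h = 2 * 0.2 * np.sinc(2 * 0.2 * m) * np.hamming(33)
h /= h.sum()
safe = np.convolve(x, h, mode='same')[::2]  # prefiltered, then decimated
```

Away from the edges, `safe` is strongly attenuated while `naive` still carries the aliased sinusoid at nearly full amplitude, which is exactly the artifact the anti-aliasing step removes.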

[0043] In the image fusion stage:

[0044] 4) For the low-frequency anti-aliasing information of each image to be fused, a pulse coupled neural network (PCNN) fusion rule is used to select the low-frequency anti-aliasing information that can be triggered...
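The description is truncated at this point, but per the abstract the low-frequency bands are fused by comparing PCNN firing activity and the high-frequency bands by an absolute-value-maximum rule. A simplified, hypothetical PCNN sketch (the patent's exact parameters and linking weights are not given) might look like:

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_firing_counts(S, iters=30, beta=0.2, aL=1.0, aT=0.2, VL=1.0, VT=20.0):
    """Count how many times each neuron of a simplified PCNN fires when
    stimulated by coefficient magnitudes S. Parameters are illustrative."""
    S = (S - S.min()) / (np.ptp(S) + 1e-12)          # normalize stimulus to [0, 1]
    W = np.array([[0.5, 1.0, 0.5],                    # linking weights to 8 neighbors
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    L = np.zeros_like(S)        # linking input
    Y = np.zeros_like(S)        # binary firing map
    theta = np.ones_like(S)     # dynamic threshold
    counts = np.zeros_like(S)
    for _ in range(iters):
        L = np.exp(-aL) * L + VL * convolve2d(Y, W, mode='same')
        U = S * (1.0 + beta * L)            # internal activity
        Y = (U > theta).astype(float)       # fire where activity exceeds threshold
        theta = np.exp(-aT) * theta + VT * Y  # threshold jumps after firing, then decays
        counts += Y
    return counts

def fuse_low(lowA, lowB):
    """Pick the low-frequency coefficient whose PCNN neuron fires more often; average ties."""
    cA = pcnn_firing_counts(np.abs(lowA))
    cB = pcnn_firing_counts(np.abs(lowB))
    out = np.where(cA > cB, lowA, lowB)
    tie = cA == cB
    out[tie] = 0.5 * (lowA[tie] + lowB[tie])
    return out

def fuse_high(highA, highB):
    """Absolute-value-maximum selection rule for high-frequency coefficients."""
    return np.where(np.abs(highA) >= np.abs(highB), highA, highB)
```

The design intuition is that strong, coherent low-frequency structure makes its neurons fire earlier and more often, so firing counts act as a saliency measure; for high-frequency detail, the larger-magnitude coefficient is simply kept.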



Abstract

The invention discloses a multi-source image fusion method. In the early feature-extraction stage, a wavelet-kernel-based support transform is combined with the anti-aliasing contourlet transform: this supplies the direction information that the support transform alone cannot extract, and eliminates the aliasing produced by the contourlet transform. In the image fusion stage, fusion decisions on the low-frequency signals of the images to be fused are made with a pulse coupled neural network, and fusion decisions on the high-frequency signals are made with an absolute-value-maximum selection rule. Effective fusion of the high- and low-frequency signals is thereby achieved.

Description

Technical field

[0001] The invention relates to the field of image fusion, in particular to a multi-source image fusion method.

Background technique

[0002] Prior-art image fusion is usually performed at the pixel level, the feature level, or the decision level. Pixel-level image fusion operates directly on the initial image data; its purpose is mainly image enhancement, image segmentation, and image classification, so as to provide better input for manual interpretation of images or for further feature-level fusion. Pixel-level fusion depends on the sensitivity of the sensor devices, and imaging distant scenes requires high-resolution sensors. Feature-level image fusion extracts feature information from each sensor image and then analyzes and processes it comprehensively; the extracted feature information is a sufficient representation or sufficient statistic of the pixel information. Typical feature info...

Claims


Application Information

IPC IPC(8): G06T5/50
Inventors: 尚赵伟, 庞庆堃, 唐远炎, 张太平, 张明新, 张凌峰
Owner CHONGQING UNIV