
Multi-focus image fusion method based on multi-scale transformation and convolution sparse representation

A multi-focus image fusion technology, applied in the field of multi-scale transformation models and convolutional sparse representation models. It addresses the contrast loss of multi-scale transformation fusion algorithms and the low brightness of convolutional sparse representation fusion algorithms, and achieves a clear fusion effect with good handling and merging of the boundary region.

Inactive Publication Date: 2020-07-17
SICHUAN POLICE COLLEGE

AI Technical Summary

Problems solved by technology

[0003] Aiming at the defects of the prior art, the present invention provides a multi-focus image fusion method based on multi-scale transformation and convolution sparse representation, which solves technical problems of the prior art such as the contrast loss of multi-scale transformation fusion algorithms and the low brightness of convolutional sparse representation fusion algorithms.



Examples


Embodiment 1

[0041] Multi-scale transformation and convolution sparse representation fusion model:

[0042] Convolutional sparse representation can be seen as a convolutional-form alternative to the patch-based sparse representation model, aiming to obtain a sparse representation of the entire image rather than of local image patches. Its basic idea is to model the entire image s ∈ R^N as the sum of a set of convolutions between coefficient maps x_m ∈ R^N and the corresponding dictionary filters d_m ∈ R^{n×n} (n ≪ N):

[0043]
\arg\min_{\{x_m\}} \ \frac{1}{2}\left\| \sum_{m} d_m * x_m - s \right\|_2^2 + \lambda \sum_{m} \left\| x_m \right\|_1 \qquad (1)

[0044] where * denotes the convolution operator. Problem (1) is solved with the convolutional basis pursuit denoising (CBPDN) algorithm based on the alternating direction method of multipliers (ADMM). The framework of the multi-scale transformation and convolutional sparse representation fusion model is shown in Figure 1. For convenience of description, two geometrically registered...
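The sketch below is a minimal, self-contained illustration of problem (1): it estimates the coefficient maps x_m for a fixed filter bank using a plain ISTA iteration instead of the ADMM-based CBPDN solver referenced above, and all parameter values (filter size, penalty weight, step size, iteration count) are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_ista(s, D, lmbda=0.05, step=0.1, n_iter=100):
    """Estimate coefficient maps X[m] so that sum_m D[m] * X[m] approximates s."""
    M = D.shape[0]
    X = np.zeros((M,) + s.shape)
    for _ in range(n_iter):
        # residual of the current reconstruction: sum_m d_m * x_m - s
        recon = sum(fftconvolve(X[m], D[m], mode="same") for m in range(M))
        r = recon - s
        # gradient of the quadratic data term w.r.t. each X[m] is the
        # correlation of the residual with the filter d_m (flipped convolution)
        grads = [fftconvolve(r, D[m][::-1, ::-1], mode="same") for m in range(M)]
        for m in range(M):
            X[m] -= step * grads[m]
            # soft-thresholding enforces the l1 penalty lambda * ||x_m||_1
            X[m] = np.sign(X[m]) * np.maximum(np.abs(X[m]) - step * lmbda, 0.0)
    return X

# toy usage on synthetic data: 4 normalized 8x8 filters, one 64x64 "image"
rng = np.random.default_rng(0)
s = rng.standard_normal((64, 64))
D = rng.standard_normal((4, 8, 8))
D /= np.linalg.norm(D.reshape(4, -1), axis=1)[:, None, None]
X = csc_ista(s, D)
activity = np.abs(X).sum(axis=0)  # per-pixel activity map, a common fusion cue
```

The per-pixel activity map computed at the end is the kind of quantity a sparse-representation fusion rule can compare between two source images when deciding which image contributes each region.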



Abstract

The invention discloses a multi-focus image fusion method based on multi-scale transformation and convolution sparse representation. The method comprises the steps of: step 1, multi-scale transformation; step 2, low-pass component fusion; step 3, high-pass component fusion; and step 4, multi-scale inverse transformation reconstruction. The method achieves a clear fusion effect: the detail-capturing capability of the multi-scale transformation model is exploited, and the translation invariance of the convolution sparse representation model is introduced into multi-focus image fusion. The resulting image handles the boundary region between the near-focus and far-focus parts well, extracts most of the details in the source images, and merges the boundary region effectively.
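As a rough reading aid for the four steps listed in the abstract, the following sketch uses a single Gaussian low-pass / high-pass split in place of the full multi-scale transform, averages the low-pass components instead of applying the convolutional sparse representation fusion rule, and fuses the high-pass components with a max-absolute rule; every choice here is a simplifying assumption, not the patent's actual procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_multifocus(img_a, img_b, sigma=2.0):
    # step 1: multi-scale transformation (simplified here to one
    # Gaussian low-pass / high-pass split per source image)
    low_a, low_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
    high_a, high_b = img_a - low_a, img_b - low_b
    # step 2: low-pass component fusion (plain average as a placeholder
    # for the convolutional sparse representation rule)
    low_f = 0.5 * (low_a + low_b)
    # step 3: high-pass component fusion (keep the larger absolute detail)
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # step 4: multi-scale inverse transformation / reconstruction
    return low_f + high_f

# usage with two registered grayscale source images as float arrays:
# fused = fuse_multifocus(img_near.astype(np.float64), img_far.astype(np.float64))
```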

Description

Technical field
[0001] The invention relates to the technical field of multi-scale transformation models and convolution sparse representation models, and in particular to a multi-focus image fusion method based on multi-scale transformation and convolution sparse representation.
Background technique
[0002] Existing multi-scale transformation models yield fused images with low contrast. For example, images fused by the Laplacian pyramid transform tend to blur in some regions, losing detail and edge information; the wavelet transform is easily affected by factors such as the low quality of the directional information carried by the captured signal, which blurs the edges of the fused image; the Curvelet transform is insufficient at expressing information in certain regions of the fused image; and the NSCT method does not perform well in detail capture, resulting in contrast loss in the fused image. Although image fusion with convolutional sparse repre...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50
CPC: G06T5/50; G06T2207/10148; G06T2207/20221
Inventors: 张铖方, 高志升
Owner: SICHUAN POLICE COLLEGE