
Image fusion method based on depth learning

An image fusion and deep learning technology, applied in the field of deep-learning-based image fusion, which addresses the problems of manually defined filters and the difficulty of obtaining prior knowledge, and reduces the method's dependence on prior knowledge.

Active Publication Date: 2017-08-29
ZHONGBEI UNIV

AI Technical Summary

Problems solved by technology

The former achieves adaptation only of the support value filter and cannot provide filter adaptation for fusion tasks that are not suited to the support value transform; the latter achieves adaptation of the fusion rules, but its filters still have to be defined manually.
In general, these studies confirm that the parameters learned by deep artificial neural networks are increasingly comprehensive and can adapt flexibly to the data, reducing a method's dependence on prior knowledge. They do not, however, solve the underlying problem in image fusion: multi-scale transformation still requires prior knowledge to select the filter type, the number of decomposition levels, the number of directions, and so on.
In practical detection tasks, such prior knowledge is often extremely difficult to obtain.
[0004] Therefore, a new method is needed to remove the reliance on prior knowledge that makes multi-scale transformation and similar image fusion approaches difficult to apply in engineering practice.




Embodiment Construction

[0023] A method for image fusion based on deep learning, comprising the following steps:

[0024] 1. Construct the basic unit of the deep stacked convolutional neural network

[0025] The deep stacked convolutional neural network is composed of multiple stacked basic units. Each basic unit consists of a high-frequency subnetwork, a low-frequency subnetwork and a fusion convolutional layer. The high-frequency and low-frequency subnetworks each consist of three convolutional layers, where the first convolutional layer constrains the input information, the second convolutional layer combines the information, and the third convolutional layer maps the combined information into high-frequency and low-frequency feature maps, specifically as follows:

[0026] (1) Input the source image x and obtain the feature map of the first convolutional layer H1 of the high-frequency subnetwork through a convolution operation. In the formula, … represents the convolution operation, ...
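As a rough illustration of the basic unit described in paragraphs [0025]–[0026], the following PyTorch sketch builds a high-frequency subnetwork and a low-frequency subnetwork of three convolutional layers each, plus a fusion convolutional layer that reconstructs the input from the two feature maps. The channel widths, kernel sizes and activation functions are illustrative assumptions (the text extracted here does not specify them), and the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn


class BasicUnit(nn.Module):
    """Sketch of one basic unit: a high-frequency and a low-frequency subnetwork
    of three convolutional layers each, plus a fusion convolutional layer."""

    def __init__(self, channels: int = 1, width: int = 16):
        super().__init__()

        def subnet() -> nn.Sequential:
            # layer 1 constrains the input, layer 2 combines the information,
            # layer 3 maps it to a feature map with `channels` channels
            return nn.Sequential(
                nn.Conv2d(channels, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, channels, kernel_size=3, padding=1),
            )

        self.high_freq = subnet()  # high-frequency subnetwork
        self.low_freq = subnet()   # low-frequency subnetwork
        # fusion convolutional layer: rebuilds the input from the two feature maps
        self.fusion = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor):
        high = self.high_freq(x)   # high-frequency feature map
        low = self.low_freq(x)     # low-frequency feature map
        recon = self.fusion(torch.cat([high, low], dim=1))
        return high, low, recon    # recon is trained to approximate x (autoencoder-style)
```

Consistent with the abstract, each such unit would be trained as an autoencoder to reconstruct its input, after which several units are stacked and the whole network is fine-tuned end to end.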



Abstract

The present invention relates to an image fusion method, in particular to an image fusion method based on deep learning. The method comprises: using convolutional layers to construct basic units based on an autoencoder; stacking multiple basic units and training them to obtain a deep stacked neural network, and fine-tuning the stacked network in an end-to-end manner; using the stacked network to decompose the input images, obtaining a high-frequency and a low-frequency feature map for each input image, and fusing the high-frequency and low-frequency feature maps by the local-variance-maximum rule and the region matching degree, respectively; and feeding the fused high-frequency and low-frequency feature maps back into the last layer of the network to obtain the final fused image. The method decomposes and reconstructs images adaptively, requires only one high-frequency and one low-frequency feature map for fusion, and needs no manual definition of filter types and no selection of the number of decomposition levels or filtering directions, so the fusion algorithm's dependence on prior knowledge is greatly reduced.
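As a minimal sketch of the two fusion rules named in the abstract — the local-variance-maximum rule for the high-frequency feature maps and the region matching degree for the low-frequency feature maps — the following NumPy/SciPy code fuses the feature maps of two source images. The window size, the matching threshold and the exact matching-degree and weighting formulas are assumptions borrowed from common region-based fusion rules rather than values taken from the patent, and the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def local_variance(f: np.ndarray, size: int = 3) -> np.ndarray:
    """Variance of f inside a size x size sliding window."""
    mean = uniform_filter(f, size)
    return uniform_filter(f * f, size) - mean * mean


def fuse_high(h1: np.ndarray, h2: np.ndarray, size: int = 3) -> np.ndarray:
    """Local-variance-maximum rule: at each pixel keep the high-frequency
    coefficient whose neighbourhood has the larger variance."""
    return np.where(local_variance(h1, size) >= local_variance(h2, size), h1, h2)


def fuse_low(l1: np.ndarray, l2: np.ndarray, size: int = 3, alpha: float = 0.75) -> np.ndarray:
    """Region-matching-degree rule (a common formulation, assumed here):
    well-matched regions are combined by a weighted average, poorly matched
    regions take the coefficients of the map with larger local energy."""
    e1 = uniform_filter(l1 * l1, size)                            # local energy of map 1
    e2 = uniform_filter(l2 * l2, size)                            # local energy of map 2
    m = 2.0 * uniform_filter(l1 * l2, size) / (e1 + e2 + 1e-12)   # matching degree, ~[0, 1]
    w_max = 0.5 + 0.5 * (1.0 - m) / (1.0 - alpha)                 # weight for the stronger map
    stronger = np.where(e1 >= e2, l1, l2)
    weaker = np.where(e1 >= e2, l2, l1)
    averaged = w_max * stronger + (1.0 - w_max) * weaker          # used where regions match
    return np.where(m >= alpha, averaged, stronger)
```

The fused high- and low-frequency maps produced this way would then be fed back into the last layer of the stacked network to reconstruct the final fused image, as the abstract describes.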

Description

Technical Field

[0001] The invention relates to an image fusion method, in particular to an image fusion method based on deep learning.

Background Technique

[0002] Image fusion is one of the key technologies of complex detection systems. Its purpose is to combine multiple images, or sequences of detection images, of the same scene into a single more complete and comprehensive image for subsequent image analysis, target recognition and tracking. Fusion in the multi-scale transform domain is currently the main approach, but it usually requires selecting an appropriate multi-scale transform and fusion rules based on prior knowledge, which is not conducive to engineering applications.

[0003] Recently, deep learning has successfully broken through the constraints of fixed-state models in many fields such as image classification and target tracking, and there has also been some exploratory research in the field of image fusion. Low-frequency images are fused separately; multi-scale ...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/62
CPCG06F18/251G06F18/253G06F18/214
Inventor 蔺素珍韩泽郑瑶
Owner ZHONGBEI UNIV