Image fusion method based on joint convolutional self-coding network

A convolutional auto-encoding and image fusion technology, applied in the field of image fusion based on a joint convolutional auto-encoding network. It addresses the problems that training label information cannot be obtained and that training data for fused images is insufficient, and achieves fused images with rich information, good quality, and high definition.

Active Publication Date: 2019-08-06
JIANGNAN UNIV

AI Technical Summary

Problems solved by technology

[0008] The purpose of the present invention is to address the deficiencies of the above-mentioned prior art by proposing an image fusion method based on a joint convolutional autoencoder network. It solves the problems that existing neural-network-based multi-sensor image fusion methods have insufficient training data for fused images and cannot obtain training label information. By fully exploiting the proposed joint convolutional autoencoder network's ability to reconstruct images, and by introducing image fusion evaluation indices into the loss function of the training network, the method can not only reconstruct the original input images, but also effectively protect image details, enhance image contrast and edge contours, improve the visual effect, and improve the quality of the fused image.
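
As a rough sketch of the network shape this paragraph implies (a minimal sketch assuming PyTorch; the layer widths, kernel sizes, and depth are illustrative guesses, since the exact architecture is not reproduced here), the joint convolutional autoencoder can be organized as two private encoder branches, one weight-shared public branch, and a decoder that reconstructs each input from its own private features plus the public features:

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # 3x3 convolution + ReLU; channel counts are illustrative, not from the patent
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

    class JointConvAutoencoder(nn.Module):
        """Each input keeps a private encoder branch; the public branch is shared."""
        def __init__(self, ch=32):
            super().__init__()
            self.private_a = conv_block(1, ch)  # private features of input A
            self.private_b = conv_block(1, ch)  # private features of input B
            self.public = conv_block(1, ch)     # same weights applied to both inputs
            self.decoder = nn.Sequential(conv_block(2 * ch, ch),
                                         nn.Conv2d(ch, 1, 3, padding=1))

        def forward(self, img_a, img_b):
            pa, pb = self.private_a(img_a), self.private_b(img_b)
            ca, cb = self.public(img_a), self.public(img_b)
            # Training target: reconstruct each input from its private + public features
            rec_a = self.decoder(torch.cat([pa, ca], dim=1))
            rec_b = self.decoder(torch.cat([pb, cb], dim=1))
            return rec_a, rec_b, (pa, pb, ca, cb)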



Examples


Embodiment Construction

[0055] An embodiment of the present invention (the "street" infrared and visible-light image pair) is described in detail below in conjunction with the accompanying drawings. This embodiment is carried out on the premise of the technical solution of the present invention. As shown in Figure 1, the detailed implementation and specific operation steps are as follows:

[0056] Step 1. During training, each image to be fused passes through the private-feature branch and the public-feature branch of the encoding layer to obtain private features and public features, respectively. To improve the joint convolutional self-encoding network's ability to fuse images, the image fusion evaluation indices MSE, SSIM, entropy, and gradient are introduced into the loss function; a multi-task loss function is designed for network training, improving the feature extraction ability of the joint convolutional self-encoding network. A sketch of such a loss is given below.
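
A minimal sketch of such a multi-task loss, assuming PyTorch. The term weights, the single-window SSIM simplification, and the finite-difference gradient term are assumptions; the entropy term named above is omitted because a differentiable entropy surrogate would lengthen the sketch considerably:

    import torch
    import torch.nn.functional as F

    def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Single-window SSIM over the whole image: a simplification of the usual
        # sliding-window SSIM, kept short for illustration.
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / \
               ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    def grad_energy(x):
        # Mean finite-difference gradient magnitude (a stand-in edge measure)
        dx = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        dy = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
        return dx + dy

    def multitask_loss(rec, src, w_mse=1.0, w_ssim=1.0, w_grad=1.0):
        # Weighted sum of reconstruction terms; the weights are assumed, not published
        mse = F.mse_loss(rec, src)
        ssim_term = 1.0 - ssim_global(rec, src)                   # reward structural similarity
        grad_term = (grad_energy(rec) - grad_energy(src)).abs()   # preserve edge strength
        return w_mse * mse + w_ssim * ssim_term + w_grad * grad_term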

[0057] Step 2. During the test process, the two images to be fused are input into the trained network model. Public and private features are obtained through the encoding layer, a fusion rule is designed according to the redundant and complementary characteristics of these features, fusion is performed at the feature level, and the fused feature maps are decoded and reconstructed to obtain the fused image.
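
One plausible instantiation of such a rule, reusing the JointConvAutoencoder sketched earlier. This is an assumption, not the patent's exact rule: the redundant public features are averaged, and the complementary private features are combined by elementwise maximum:

    import torch

    def fuse(model, img_a, img_b):
        # Test-time fusion: encode both inputs, merge at the feature level, decode
        with torch.no_grad():
            pa, ca = model.private_a(img_a), model.public(img_a)
            pb, cb = model.private_b(img_b), model.public(img_b)
            public_fused = 0.5 * (ca + cb)          # redundant information: average
            private_fused = torch.maximum(pa, pb)   # complementary information: max
            return model.decoder(torch.cat([private_fused, public_fused], dim=1))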



Abstract

The invention discloses an image fusion method based on a joint convolutional self-coding network, and belongs to the field of image fusion. The method mainly addresses the shortage of data sets and labels in image fusion, and obtains an end-to-end fusion result through a joint convolutional self-coding network. The method comprises the following steps. First, in the training process, a joint convolutional self-coding network model is trained on the set of images to be fused, using a multi-task loss function designed specifically for image fusion. In the testing process, the two images to be fused are input into the network model; public features and private features are obtained through the network's coding layer, a fusion rule is designed according to the redundant and complementary characteristics of these features, fusion is performed at the feature level, and the fused feature maps are decoded and reconstructed to obtain the fused image. The method fully exploits the characteristics of the self-coding neural network to formulate a fusion strategy integrating the complementary and redundant information of the images to be fused; image details are effectively protected, and the quality of the fused image is greatly improved compared with traditional fusion methods.
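
Tying the sketches above together, a hypothetical end-to-end run; the patch shapes, learning rate, and random stand-in tensors are assumptions made only for illustration:

    # One training step plus one fusion call, using the sketches defined earlier
    model = JointConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    img_a = torch.rand(1, 1, 256, 256)  # stand-in for an infrared patch
    img_b = torch.rand(1, 1, 256, 256)  # stand-in for a visible-light patch

    # Training: reconstruct each input under the multi-task loss
    rec_a, rec_b, _ = model(img_a, img_b)
    loss = multitask_loss(rec_a, img_a) + multitask_loss(rec_b, img_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

    fused = fuse(model, img_a, img_b)   # test-time fusion of the two inputs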

Description

Technical field

[0001] The invention belongs to the field of image fusion and relates to an image fusion method based on a joint convolutional self-encoding network, which is widely used in fields such as scene monitoring and battlefield reconnaissance.

Background technique

[0002] Image fusion is an image enhancement technology and a research branch and focus in the field of information fusion. A fused image is generated by fusing images acquired by different sensors; it is robust and contains rich information from the source images, which benefits subsequent image processing. The field of image fusion covers a wide range of research, and fusion processes are complex and diverse, so it is difficult to find a mature, general-purpose image fusion algorithm suitable for the whole field. Typical research objects include: multi-focus image fusion, infrared and visible light image fusion, an...


Application Information

IPC(8): G06T5/50; G06N3/04; G06N3/08
CPC: G06T5/50; G06N3/08; G06T2207/20221; G06N3/045; Y02T10/40
Inventors: 罗晓清, 张战成, 熊梦渔, 张宝成
Owner JIANGNAN UNIV