An unsupervised image fusion method based on deep learning

A deep-learning-based image fusion technology in the field of image processing. It addresses the problems that image fusion results lack a strict evaluation index, that supervised learning is therefore difficult, and that complex models are hard to deploy on mobile terminals, while achieving a high-quality fusion effect.

Active Publication Date: 2019-06-21
ARMY ENG UNIV OF PLA

AI Technical Summary

Problems solved by technology

However, because deep convolutional neural network models are structurally complex and use a single fusion strategy, they demand large amounts of storage and computing resources in practical applications and are difficult to deploy on mobile terminals such as mobile phones.
At the same time, because there is no strict evaluation index for image fusion results, it is difficult to obtain supervised information from which to learn.

Method used




Embodiment Construction

[0024] The present invention is now described in further detail with reference to the accompanying drawings.

[0025] As shown in Figure 1, a lightweight unsupervised image fusion method based on deep learning includes the following steps:

[0026] Step S1: Acquire visible light and infrared images, preprocess the images with a computer, and construct a data set for training an image fusion network; the data set contains paired infrared and visible light images.

[0027] In this embodiment, the acquired infrared and visible light images must be paired, that is, taken at the same position and at the same time; images acquired from different data sources do not need to be scaled to the same scale. When constructing the training data set, collection stops once the data set contains a preset number of image pairs.
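The data collection rule in step S1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: `pair_source` and `build_dataset` are hypothetical names, and the [0, 1] intensity normalization is an assumed preprocessing choice.

```python
import numpy as np

def build_dataset(pair_source, target_count):
    """Collect co-registered IR/visible image pairs until the data set
    reaches a preset size, as described in step S1.

    pair_source : iterable yielding (ir, vis) uint8 arrays (hypothetical)
    target_count: preset number of pairs at which collection stops
    """
    dataset = []
    for ir, vis in pair_source:
        # Assumed preprocessing: scale 8-bit intensities to [0, 1].
        ir = ir.astype(np.float32) / 255.0
        vis = vis.astype(np.float32) / 255.0
        dataset.append((ir, vis))
        if len(dataset) >= target_count:  # stop at the preset size
            break
    return dataset
```

Note that each pair is assumed to be already registered (same position, same time); the sketch does no alignment of its own.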

[0028] Specifically, the following content is included in step S1:

[0029] 1.1. The infrared and visible light images to be col...



Abstract

An unsupervised image fusion method based on deep learning comprises the following steps. Visible light and infrared images are obtained, the images are preprocessed by a computer, and a data set used for training an image fusion network is constructed; the data set comprises paired infrared and visible light images. A lightweight deep convolutional neural network is constructed that can perform weighted fusion and decoding restoration on the input visible light and infrared images. A hybrid loss function comprising an image generation loss and a structure loss is constructed and used to train the deep convolutional neural network, yielding the parameters of the deep image fusion network model. After model learning is finished, the decoding network is removed; visible light and infrared images can then be input to the network, and the output of the network is the fused image. The invention realizes a lightweight image fusion method that can achieve a high-quality fusion effect on mobile and embedded devices with limited computing resources.
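The pipeline in the abstract (shared encoder, weighted feature fusion, training-only decoder, hybrid loss) can be sketched in PyTorch. This is a minimal illustration under assumed layer sizes and loss terms; the patent excerpt does not specify the exact architecture or the exact form of the generation and structure losses, so every layer width and loss term below is an assumption.

```python
import torch
import torch.nn as nn

class LightFusionNet(nn.Module):
    """Sketch of a lightweight fusion network: a shared encoder extracts
    features from each source image, a learned per-pixel weight map fuses
    them, and a decoder reconstructs an image during training only."""

    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # 1x1 conv predicting a fusion weight map from both feature stacks
        self.weight = nn.Conv2d(2 * ch, 1, 1)
        # decoding layer used only while training; removed at inference
        self.decoder = nn.Conv2d(ch, 1, 3, padding=1)

    def fuse(self, ir, vis):
        f_ir, f_vis = self.encoder(ir), self.encoder(vis)
        w = torch.sigmoid(self.weight(torch.cat([f_ir, f_vis], dim=1)))
        return w * f_ir + (1.0 - w) * f_vis  # weighted feature fusion

    def forward(self, ir, vis):
        return self.decoder(self.fuse(ir, vis))  # decoding restoration

def hybrid_loss(out, ir, vis, alpha=0.5):
    """Assumed hybrid loss: a pixel-level generation term plus a
    gradient-based structure term (illustrative, not the patent's)."""
    gen = torch.mean((out - 0.5 * (ir + vis)) ** 2)
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    struct = torch.mean((dx(out) - dx(torch.max(ir, vis))) ** 2)
    return gen + alpha * struct
```

At inference time, the idea in the abstract would correspond to calling `fuse` (or a trimmed module without `decoder`) so that only the lightweight encoder and fusion weights remain.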

Description

technical field

[0001] The invention belongs to the technical field of image processing, and in particular relates to an unsupervised image fusion method based on deep learning.

Background technique

[0002] With the development of information technology, digital images are widely used in many scenarios. However, the use of multiple sensors also brings redundant information and increased analysis complexity. How to make better comprehensive use of multi-source sensing information, merge redundant multi-source information, and at the same time construct richer fused information has become a key problem that researchers urgently need to solve. Image fusion is one of the key issues in complex detection systems. Its purpose is to use specific algorithms to synthesize multiple source images of the same scene into a new image with more complete information. Although image fusion has been studied for a long time, due to limitations in practical applications, current fusion...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50
Inventors: 李阳, 王继霄, 苗壮, 王家宝, 张睿, 卢继荣
Owner: ARMY ENG UNIV OF PLA