Self-supervised learning fusion method for multi-band images

A fusion method and self-supervised learning technology, applied in the field of image fusion, which can solve the problems of limited fusion results caused by the lack of labeled images.

Active Publication Date: 2020-11-10
ZHONGBEI UNIV

AI Technical Summary

Problems solved by technology

[0004] In order to solve the problem of limited fusion results due to the lack of label images when deep learning methods are used to fuse multi-band images, the invention provides a self-supervised learning fusion method for multi-band images based on a multi-discriminator generative adversarial network.



Examples


Embodiment Construction

[0027] The self-supervised learning fusion method for multi-band images based on a multi-discriminator generative adversarial network comprises the following steps:

[0028] The first step is to design and build a generative adversarial network: design and build a multi-discriminator generative adversarial network structure. The multi-discriminator generative adversarial network consists of a generator and multiple discriminators; taking n-band image fusion as an example, the network comprises one generator and n discriminators, as sketched below.
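A minimal sketch of this one-generator, n-discriminator structure is given below, assuming a PyTorch implementation; the discriminator layer sizes and the helper build_network are illustrative choices, not specified by the patent, and the generator itself is sketched after the next paragraph.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """One discriminator per band: scores whether an input image is that
    band's source image (real) or the generator's fused output (fake)."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),  # patch-level real/fake logits
        )

    def forward(self, x):
        return self.net(x)

def build_network(generator: nn.Module, n_bands: int):
    """For n-band fusion: one generator plus n discriminators, one per band."""
    discriminators = nn.ModuleList(Discriminator() for _ in range(n_bands))
    return generator, discriminators
```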

[0029] The generator network structure consists of two parts: a feature enhancement module and a feature fusion module. The feature enhancement module extracts the features of the source image in each band and enhances them to obtain a multi-channel feature map for each band. The feature fusion module uses a concatenation layer to connect the feature maps along the channel dimension and reconstructs the concatenated feature map into a fused image, as follows:

[0030] The feature ...
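The sketch below illustrates the generator described in [0029], again assuming PyTorch; since [0030] is truncated here, the channel counts, layer depths, and activations are illustrative assumptions rather than the patent's exact configuration.

```python
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Extracts and enhances features of one band's source image,
    yielding a multi-channel feature map for that band."""
    def __init__(self, out_channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, out_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Per-band feature enhancement, feature connection along the channel
    dimension, then reconstruction of the connected map into a fused image."""
    def __init__(self, n_bands: int, feat_channels: int = 16):
        super().__init__()
        self.enhance = nn.ModuleList(FeatureEnhancement(feat_channels) for _ in range(n_bands))
        self.fuse = nn.Sequential(
            nn.Conv2d(n_bands * feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, bands):
        # bands: list of n tensors, each of shape (B, 1, H, W)
        feats = [m(b) for m, b in zip(self.enhance, bands)]
        fused_feats = torch.cat(feats, dim=1)  # connect along the channel dimension
        return self.fuse(fused_feats)
```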



Abstract

The invention relates to a fusion method for multi-band images, in particular to a self-supervised learning fusion method for multi-band images based on a multi-discriminator generative adversarial network. The method comprises the following steps: designing and constructing a generative adversarial network consisting of a generator and a plurality of discriminators, with the multi-band source images serving as the label images; the generator network structure is composed of a feature enhancement module and a feature fusion module; a generation model is obtained through dynamic balance training of the generator and the discriminators, yielding the multi-band image fusion result. An end-to-end self-supervised fusion neural network for multi-band images is thereby realized; the fused result has better clarity and information content, richer detail, and better conforms to the visual characteristics of the human eye.
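As a rough illustration of the self-supervised training described above, the sketch below shows one training step in which each band's source image serves as the label for its own discriminator while the generator is trained to fool all n discriminators. It assumes PyTorch and plain binary cross-entropy adversarial losses; these, and the optimizer handling, are assumptions for illustration, not the patent's stated loss functions.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminators, bands, g_opt, d_opts):
    """One step of the dynamic-balance training between the generator and
    the n band-specific discriminators (losses/optimizers are illustrative)."""
    # Update each discriminator: its band's source image is "real",
    # the current fused output is "fake".
    fused = generator(bands).detach()
    for disc, band, opt in zip(discriminators, bands, d_opts):
        real, fake = disc(band), disc(fused)
        d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
                  + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
        opt.zero_grad()
        d_loss.backward()
        opt.step()

    # Update the generator so the fused image fools every discriminator.
    fused = generator(bands)
    g_loss = 0.0
    for disc in discriminators:
        fake = disc(fused)
        g_loss = g_loss + F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return float(g_loss)
```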

Description

Technical field

[0001] The invention relates to an image fusion method, in particular to a multi-band image fusion method, and specifically to a self-supervised learning fusion method for multi-band images.

Background technique

[0002] At present, high-precision detection systems have generally adopted wide-spectrum multi-band imaging, while existing research mainly addresses the two bands of infrared and visible light; there is therefore an urgent need to explore the synchronous fusion of multiple (≥3) band images. In recent years, image fusion research based on deep artificial neural networks has emerged. However, the field of image fusion lacks standard fusion results, that is, label data is generally unavailable when deep learning is used to build image fusion models, which makes training difficult or degrades the fusion effect. The more images are fused synchronously, the more prominent this problem becomes.

[0003] Self-supervised learning ...


Application Information

IPC(8): G06T5/50; G06K9/62; G06N3/04; G06N3/08
CPC: G06T5/50; G06N3/08; G06T2207/20221; G06N3/045; G06F18/256; G06F18/253; G06F18/24; G06F18/214
Inventor: 蔺素珍, 田嵩旺, 禄晓飞, 李大威, 李毅, 王丽芳
Owner ZHONGBEI UNIV