
Multi-modal image fusion method based on generative adversarial network and super-resolution network

A multi-modal image fusion and super-resolution technology, applied in the field of image fusion

Status: Inactive · Publication Date: 2019-02-12
ZHONGBEI UNIV
Cites: 6 · Cited by: 60

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problem of adaptive fusion of multi-modal images, the present invention proposes a multi-modal image fusion method based on a generative adversarial network and a super-resolution network.



Examples


Embodiment Construction

[0024] A multi-modal image fusion method based on a generative adversarial network and a super-resolution network, comprising the following steps:

[0025] 1. Design and build the generative adversarial network structure

[0026] The generator network structure is a residual-based convolutional neural network consisting of three convolutional layers and seven residual blocks, each of which contains two convolutional layers; the discriminator consists of six convolutional layers, a standard three-layer residual unit block, and a fully connected layer, as follows (see the code sketch after these steps):

[0027] (1) Input the multi-band source images or multi-modal medical source images into the generator and perform a convolution operation with a kernel size of 3×3×64; then feed the result into 7 residual blocks, each consisting of two convolutional layers. The target to be learned is F(x) = H(x) − x, where x represents the network input, H(x) represents the expected output, and F(x) represents the residual to be learned...
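The architecture described in [0026]–[0027] can be sketched in code. This is a minimal PyTorch sketch, not the patent's implementation: the 3×3 kernel with 64 channels, the seven two-convolution residual blocks with target F(x) = H(x) − x, and the discriminator's six convolutions plus three-layer residual unit and fully connected layer come from the text, while all strides, paddings, activations, channel widths, input band counts, and the placement of the generator's two remaining convolutional layers are assumptions.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 conv layers learning the residual F(x) = H(x) - x."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # H(x) = F(x) + x: the block outputs its input plus the learned residual
        return x + self.body(x)

class Generator(nn.Module):
    """Three conv layers and seven residual blocks, per [0026]."""
    def __init__(self, in_bands=2, out_channels=1):  # band count assumed
        super().__init__()
        self.head = nn.Conv2d(in_bands, 64, kernel_size=3, padding=1)  # 3x3x64, per [0027]
        self.blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(7)])
        # Placement of the remaining two conv layers is assumed (tail here).
        self.tail = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.tail(self.blocks(torch.relu(self.head(x))))

class Discriminator(nn.Module):
    """Six conv layers, a three-layer residual unit, and a fully connected layer."""
    def __init__(self, in_channels=1):
        super().__init__()
        widths = [in_channels, 64, 64, 128, 128, 256, 256]  # channel widths assumed
        layers = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):    # six downsampling convs
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.convs = nn.Sequential(*layers)
        self.res_unit = nn.Sequential(                      # "standard three-layer residual unit"
            nn.Conv2d(256, 256, kernel_size=1),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.Conv2d(256, 256, kernel_size=1),
        )
        self.fc = nn.Linear(256, 1)                         # real/fused logit

    def forward(self, x):
        h = self.convs(x)
        h = h + self.res_unit(h)   # residual connection around the unit
        h = h.mean(dim=(2, 3))     # global average pooling before the FC layer (assumed)
        return self.fc(h)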



Abstract

The present invention relates to an image fusion method, in particular to a multi-modal image fusion method, and especially to a multi-modal image fusion method based on a generative adversarial network and a super-resolution network. The method is carried out according to the following steps: designing and constructing the generative adversarial network, whose structure adopts a specially designed deep residual neural network, and obtaining a generating model through the dynamic-balance training of a generator and a discriminator; constructing the super-resolution network from convolutional layers; inputting the multi-band/multi-modal source images into the generating model to obtain a preliminary fusion image; and then inputting that image into the trained super-resolution network to get the final high-quality fusion image. The method realizes end-to-end neural-network fusion of multi-band/multi-modal images, avoids the difficulties of multi-scale, multi-directional image decomposition and of fusion-rule design based on prior knowledge, and realizes adaptive network fusion.
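To make the pipeline concrete, here is a hedged sketch of the inference path the abstract describes, assuming a generator like the one sketched under [0027]: the source images go through the trained generating model to obtain the preliminary fusion image, which then goes through a purely convolutional super-resolution network. The SRNet layer count, the sub-pixel upsampling, the scale factor, and the fuse helper are illustrative assumptions; the text states only that the super-resolution network is built from convolutional layers.

import torch
import torch.nn as nn

class SRNet(nn.Module):
    """Convolution-only super-resolution stage (layer layout and scale assumed)."""
    def __init__(self, channels=1, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-resolution image
        )

    def forward(self, x):
        return self.net(x)

def fuse(generating_model, sr_net, bands):
    """bands: (N, B, H, W) stack of co-registered multi-band/multi-modal sources."""
    with torch.no_grad():
        preliminary = generating_model(bands)  # adaptive end-to-end fusion
        return sr_net(preliminary)             # quality-enhanced final fusion image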

Description

Technical Field

[0001] The present invention relates to an image fusion method, in particular to a multi-modal image fusion method, and specifically to a multi-modal image fusion method based on a generative adversarial network and a super-resolution network.

Background Art

[0002] Multi-band/multi-modal imaging systems have been widely used in military, medical, industrial-inspection and many other fields, and image fusion is one of the key technologies enabling these systems to achieve high-precision intelligent detection. Current image fusion technology can be roughly divided into two categories: spatial-domain and frequency-domain methods. The former are simple and fast and are widely used in hardware systems, but their fusion effect is limited; the latter apply more targeted fusion rules to the multi-scale, multi-directional decomposition of the source images, which improves the fusion effect and makes them the current research hotspot, but the algorithms are often complex, re...
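As an aside on the "simple and fast" spatial-domain category mentioned above, such methods combine pixels of co-registered sources directly, for example by a weighted average. A minimal illustration (not the patent's method; the weight is a free parameter):

import numpy as np

def weighted_average_fusion(img_a, img_b, w=0.5):
    """Pixel-wise weighted average of two same-size grayscale source images."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    return w * a + (1.0 - w) * b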


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/50, G06T3/40, G06N3/04
CPC: G06T3/4038, G06T3/4053, G06T5/50, G06T2207/10048, G06T2207/10088, G06T2207/10081, G06T2207/10004, G06T2207/20081, G06T2207/20221, G06N3/045
Inventor: 蔺素珍, 杨晓莉, 李大威, 王丽芳
Owner: ZHONGBEI UNIV