
Multi-exposure image fusion method based on attention generative adversarial network

A technology combining image fusion and attention mechanisms, applied to biological neural network models, image enhancement, and image analysis; it addresses problems such as enhancing the dynamic range of images.

Pending Publication Date: 2020-07-17
BEIJING UNIV OF TECH
Cites: 0 · Cited by: 8

AI Technical Summary

Problems solved by technology

[0009] The purpose of the present invention is to overcome the reliance of existing multi-exposure image fusion methods on artificially defined fusion weights computed in different ways. Aiming at the problem of enhancing image dynamic range through multi-exposure image fusion, the invention provides a multi-exposure image fusion method based on an attention generative adversarial network. The generative adversarial network uses a channel attention mechanism to adaptively determine the weight of each input and a spatial attention mechanism to adaptively determine the weight of each spatial position, realizing data-driven, adaptive, end-to-end multi-exposure image fusion.
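As an illustration of the two attention mechanisms named above, here is a minimal NumPy sketch of the general pooling-plus-sigmoid weighting pattern. This is not the patent's network: the weight matrix `w`, the feature shapes, and the use of a plain channel mean for the spatial score are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    # feat: (C, H, W); w: (C, C) hypothetical learned weight matrix.
    # Global average pooling squeezes each channel to one statistic;
    # a transform plus sigmoid then yields a weight per channel.
    pooled = feat.mean(axis=(1, 2))           # (C,)
    weights = sigmoid(w @ pooled)             # (C,) in (0, 1)
    return feat * weights[:, None, None]      # re-scale each channel

def spatial_attention(feat):
    # Collapse channels to one map, squash to (0, 1), and
    # re-weight every spatial position with the resulting score.
    score = sigmoid(feat.mean(axis=0))        # (H, W)
    return feat * score[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w = rng.standard_normal((4, 4))
out = spatial_attention(channel_attention(feat, w))
print(out.shape)  # (4, 8, 8)
```

Channel attention thus re-weights whole inputs, while spatial attention re-weights individual positions, which is exactly the pair of degrees of freedom the paragraph above claims the network learns adaptively.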




Embodiment Construction

[0039] The embodiments of the present invention are described below with reference to the accompanying drawings:

[0040] The present invention consists of the following parts. First, a generative adversarial network based on the attention mechanism is constructed; it comprises two parts, a generator network and a discriminator network, and the attention mechanism is introduced into the generator network. Second, the generator and discriminator networks for multi-exposure image fusion are trained adversarially: over the training sample set, the generator network and the discriminator network are trained alternately, thereby obtaining the parameters of the generator network. Last is the test stage: the multi-exposure fusion stage takes three images with different exposures as input and applies the trained generator network to perform multi-exposure image fusion. The specific process is described below.
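The alternating training schedule described in [0040] can be sketched as follows. This is a minimal illustration of the schedule only: `update_d` and `update_g` are hypothetical placeholders for one gradient step on the discriminator and generator losses, not the patent's actual networks or objectives.

```python
import numpy as np

# Placeholder "gradient steps": each stands in for one optimizer
# update on the discriminator / generator loss (hypothetical).
def update_d(d_params, g_params, batch):
    return d_params + 0.1 * np.sign(batch.mean())  # toy D step

def update_g(g_params, d_params, batch):
    return g_params - 0.1 * np.sign(batch.mean())  # toy G step

def train(batches, d_steps=1, g_steps=1):
    d_params, g_params = 0.0, 0.0
    for batch in batches:
        for _ in range(d_steps):      # train D while G is held fixed
            d_params = update_d(d_params, g_params, batch)
        for _ in range(g_steps):      # train G while D is held fixed
            g_params = update_g(g_params, d_params, batch)
    return d_params, g_params

d_final, g_final = train([np.ones(3)] * 4)
```

The point of the alternation is that each network is updated against a temporarily frozen opponent; after training, only the generator's parameters are kept for the fusion (test) stage.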

[0041] (1) Network construction

...



Abstract

The invention provides a multi-exposure image fusion method based on an attention generative adversarial network. The idea of the attention mechanism closely matches the detail-weighting problem in multi-exposure fusion: the weight of each input image can be adaptively selected by applying channel attention, and the weights of different spatial positions are adaptively selected by using spatial attention. The technology has broad application prospects in various multimedia vision fields. The algorithm designs a new attention generative adversarial network for the multi-exposure image fusion task and introduces a visual attention mechanism into the generator network, helping the network adaptively learn the weights of different input images and different spatial positions to achieve a better fusion effect.
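To make the per-image, per-position weighting concrete, the following toy sketch fuses three exposures with softmax weights. In the invention these weights would be produced by the attention network; here a fixed hand-made "well-exposedness" score (favoring pixels near mid-gray) substitutes for the learned weights so the example runs without a trained model.

```python
import numpy as np

def fuse(exposures):
    # exposures: (K, H, W) grayscale images in [0, 1].
    # Score each pixel by closeness to mid-gray (0.5); this fixed
    # score is a stand-in for the learned attention weights.
    score = -((exposures - 0.5) ** 2) / 0.08
    w = np.exp(score)
    w /= w.sum(axis=0, keepdims=True)      # softmax over the K inputs
    return (w * exposures).sum(axis=0)     # per-pixel weighted blend

under = np.full((2, 2), 0.1)   # under-exposed
mid   = np.full((2, 2), 0.5)   # well-exposed
over  = np.full((2, 2), 0.9)   # over-exposed
fused = fuse(np.stack([under, mid, over]))
```

Every pixel of the fused result gets its own weight for every input image, which is the quantity the channel and spatial attention mechanisms are meant to select adaptively instead of by a fixed rule like the one above.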

Description

technical field

[0001] The invention belongs to the field of digital image/video signal processing, and in particular relates to a multi-exposure image fusion method based on an attention generative adversarial network.

Background technique

[0002] With the development of computer and multimedia technology, various multimedia applications have created extensive demand for high-quality images. High-quality images provide rich information and a realistic visual experience. However, during image acquisition, due to the influence of the acquisition equipment, the acquisition environment, noise, and other factors, the images presented on the display terminal are often of low quality. Therefore, how to reconstruct high-quality images from low-quality images has long been a challenge in the field of image processing.

[0003] From bright sunlight to dim starlight, illumination intensities in natural scenes can span a very large dynamic range, with brightness contrasts exceedi...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06T7/00, G06N3/04, G06N3/08
CPC: G06T7/0002, G06N3/084, G06T2207/20081, G06N3/045
Inventors: 李晓光, 吴超玮, 黄江鲁, 卓力, 李嘉锋
Owner: BEIJING UNIV OF TECH