Dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN)

A deep-convolution image fusion technology, applied in graphics and image conversion, image data processing, character and pattern recognition, etc. It addresses the problem of poor fusion quality in conventional image fusion methods and achieves the effect of reducing hardware load.

Pending Publication Date: 2019-02-19
STATE GRID GANSU ELECTRIC POWER CORP +1

AI Technical Summary

Problems solved by technology

[0003] The purpose of the present invention is to provide a dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN), to solve the problem of poor fusion quality in traditional image fusion and enhancement methods.




Embodiment Construction

[0032] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0033] As shown in Figure 1, a dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN) is presented. The model fuses a visible-light image and an infrared image of the same object. The model is established through the following steps:

[0034] Step 1. The discriminator network extracts features: First, a large number of visible-light images and infrared images of the object are scaled, preserving aspect ratio, to the same size to form an image training library. A convolutional neural network initialized with VGG parameters is then constructed as the discriminator network, and the training library is used to train it until it can reliably distinguish infrared images from visible-light images. The visible-light images and inf...
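Step 1 above can be sketched in miniature. The sketch below builds the labeled training library and trains a stand-in discriminator; note that the patent uses a VGG-initialized convolutional network, whereas this toy uses logistic regression on mean intensity purely to illustrate the train-to-separate-modalities idea. All function names here are hypothetical, not from the patent.

```python
import numpy as np

def rescale(img, out_h, out_w):
    """Nearest-neighbour rescale of a 2-D grayscale image to a common size."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[np.ix_(rows, cols)]

def build_training_library(visible_imgs, infrared_imgs, size=(64, 64)):
    """Scale every image to the same size and label it:
    0 = visible light, 1 = infrared light."""
    x, y = [], []
    for img in visible_imgs:
        x.append(rescale(img, *size)); y.append(0)
    for img in infrared_imgs:
        x.append(rescale(img, *size)); y.append(1)
    return np.stack(x), np.array(y, dtype=float)

def train_discriminator(x, y, lr=0.5, steps=500):
    """Stand-in discriminator: logistic regression on one feature
    (mean intensity). The real model is a VGG-initialized CNN."""
    feats = x.reshape(len(x), -1).mean(axis=1)
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats * w + b)))
        g = p - y                     # gradient of the logistic loss
        w -= lr * (g * feats).mean()
        b -= lr * g.mean()
    return w, b

def predict(w, b, img):
    """Classify one image: 0 = visible, 1 = infrared."""
    z = img.mean() * w + b
    return int(1.0 / (1.0 + np.exp(-z)) > 0.5)
```

Once the two modalities separate cleanly in feature space, the trained network can be reused as the feature extractor for the later fusion steps.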


Abstract

The invention discloses a dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN). The model extracts image features of the same target under visible light and infrared light through a deep discriminative convolutional network, and sparsely encodes the two sets of features against a shared feature dictionary. The encoded features are then fused and used as the input of the deep convolutional generator network, which generates the fused image. Finally, the model is trained on the error between the features of the generated image and the fused encoded features, yielding the dual-light fusion image. Because deep networks extract and encode the features of the visible and infrared images, the feature points of the two images are matched automatically when the encoded features are fused. After training, the model can be invoked at any time: given a visible-light image and an infrared image of the same scene, it automatically generates a high-quality fused dual-light image.

Description

Technical Field

[0001] The invention relates to the technical field of computer image acquisition and synthesis, and in particular to a dual-light image fusion model based on a deep convolutional generative adversarial network (DCGAN).

Background

[0002] In recent years, with the continuous development of modern science and technology and the widespread adoption of the Internet, the capabilities of thermal night-vision monitoring equipment have improved greatly while its cost has fallen sharply. It has therefore become common to use infrared surveillance cameras to obtain temperature information from industrial machinery and equipment, and many methods for processing infrared surveillance images have emerged. A typical infrared detector images a scene by receiving the infrared radiation emitted or reflected by targets; it penetrates smoke well and still has goo...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40; G06K9/62
CPC: G06T3/4046; G06T3/4061; G06F18/23213
Inventor 齐兴顺王胜利张忠元方勇李哲唐凯黎炎焦小强陈杨王宇张波邓璐韩冬
Owner STATE GRID GANSU ELECTRIC POWER CORP