
An image de-fog method based on depth neural network

A method in the fields of image processing and deep learning that addresses the problem that existing approaches cannot achieve a satisfactory dehazing effect on certain scenes, and offers high dehazing efficiency, easy implementation, and fast speed.

Active Publication Date: 2019-03-15
TIANJIN UNIV

Problems solved by technology

[0005] Traditional image defogging methods rely on hand-crafted features to estimate depth or transmittance. However, these hand-crafted features have inherent limitations and cannot achieve satisfactory defogging for images of certain scenes.



Examples


Embodiment 1

[0047] To achieve high-quality image defogging, an embodiment of the present invention proposes an image defogging method based on a deep neural network; see Figure 1 and the description below:

[0048] 101: Select the global atmospheric light and atmospheric scattering coefficient, and use the depth of field to generate a hazy image and its transmittance map; assemble the haze-free image, hazy image, and transmittance map into a training set;
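The synthesis in step 101 follows the standard atmospheric scattering model. A minimal sketch is below; the function name, parameter values, and image sizes are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def synthesize_haze(clear, depth, airlight=0.8, beta=1.0):
    """Render a hazy image from a clear image and its depth map via the
    atmospheric scattering model: I = J*t + A*(1 - t), with t = exp(-beta*d).
    `airlight` is the global atmospheric light A, `beta` the scattering
    coefficient; both values here are illustrative."""
    t = np.exp(-beta * depth)            # transmittance from depth of field
    t3 = t[..., np.newaxis]              # broadcast over the RGB channels
    hazy = clear * t3 + airlight * (1.0 - t3)
    return hazy, t

# Example: a 4x4 RGB image with depth increasing from 0 to 2
clear = np.random.rand(4, 4, 3)
depth = np.linspace(0.0, 2.0, 16).reshape(4, 4)
hazy, t = synthesize_haze(clear, depth)
```

Each synthesized triple (haze-free image, hazy image, transmittance map) would then be one training example.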

[0049] 102: Based on an encoder-decoder architecture, build a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network; train the generator with a linear combination of the adversarial loss, the transmittance L1-norm loss, and the dehazed-image L1-norm loss;
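The linear loss combination in step 102 can be sketched as follows. The weights `w_adv`, `w_t`, `w_j` and the non-saturating form of the adversarial term are assumptions for illustration; the patent only states that the three terms are combined linearly:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error (L1 norm loss)."""
    return np.mean(np.abs(a - b))

def generator_loss(d_fake, t_pred, t_true, j_pred, j_true,
                   w_adv=1.0, w_t=10.0, w_j=10.0):
    """Adversarial term (discriminator's probability that the dehazed output
    is real, pushed toward 1) plus L1 losses on the transmittance map and the
    dehazed image. Weights are illustrative, not from the patent text."""
    adv = -np.mean(np.log(d_fake + 1e-8))   # assumed non-saturating GAN loss
    return w_adv * adv + w_t * l1(t_pred, t_true) + w_j * l1(j_pred, j_true)
```

When the discriminator is fully fooled and both L1 terms vanish, the loss approaches zero under this formulation.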

[0050] 103: Construct a discriminator network from convolutional layers, a sigmoid activation function, and LeakyReLU activations; use real haze-free images and the dehazed images produced by the dehazing sub-network as positive and negative samples, respectively, and train the discriminator with a cross-entropy cost function.
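The discriminator objective in step 103 is the usual binary cross-entropy over real (positive) and generated (negative) samples. A numpy sketch, with the logit-based formulation as an assumed detail:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(logits_real, logits_fake):
    """Binary cross-entropy: real haze-free images are labeled 1, dehazed
    generator outputs are labeled 0, matching step 103's positive/negative
    sample assignment."""
    eps = 1e-8
    p_real = sigmoid(logits_real)
    p_fake = sigmoid(logits_fake)
    return (-np.mean(np.log(p_real + eps))
            - np.mean(np.log(1.0 - p_fake + eps)))
```

A well-trained discriminator assigns high logits to real images and low logits to generated ones, driving this loss toward zero.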

Embodiment 2

[0092] The scheme of Embodiment 1 is described in detail below in conjunction with specific drawings and formulas:

[0093] 201: Select the global atmospheric light and atmospheric scattering coefficient, and use the depth of field to generate a hazy image and its transmittance map; assemble the haze-free image, hazy image, and transmittance map into a training set;

[0094] 202: Based on an encoder-decoder architecture, construct a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network; train the generator with a linear combination of the adversarial loss, the transmittance L1-norm loss, and the dehazed-image L1-norm loss;

[0095] 203: Construct a discriminator network from convolutional layers, a sigmoid activation function, and LeakyReLU activations; use real haze-free images and the dehazed images generated by the dehazing sub-network as positive and negative samples, respectively, and train the discriminator with a cross-entropy cost function.

Embodiment 3

[0117] The feasibility of the schemes in Embodiments 1 and 2 is verified with experimental data below:

[0118] Three real foggy scenes were selected and dehazed with the method of the present invention; Figures 4, 5, and 6 show the foggy scenes and their dehazed results. The results show that the method achieves a good dehazing effect on foggy images of real scenes.



Abstract

The invention discloses an image defogging method based on a deep neural network, comprising the following steps. Select the global atmospheric light and atmospheric scattering coefficient, and use the depth of field to generate a hazy image and its transmittance map; assemble the haze-free image, hazy image, and transmittance map into a training set. Based on an encoder-decoder architecture, construct a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network, and train the generator with a linear combination of the adversarial loss, the transmittance L1-norm loss, and the dehazed-image L1-norm loss. Construct a discriminator network from convolutional layers, a sigmoid activation function, and LeakyReLU activations; use real haze-free images and the dehazed images generated by the dehazing sub-network as positive and negative samples, respectively, and train the discriminator with a cross-entropy cost function. Train the generator and discriminator alternately in an adversarial manner. After training, a hazy image to be defogged is input to the generator, and the dehazed image is obtained in a single forward propagation.
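The alternating training and single-forward-pass inference described in the abstract amount to the following control flow. This is a structural sketch only: `generator`, `discriminator`, and their `update` methods are placeholders for whatever framework implements the networks, not names from the patent:

```python
def train(generator, discriminator, data, epochs=1):
    """Alternate one discriminator update and one generator update per batch,
    as in the abstract's adversarial training procedure."""
    for _ in range(epochs):
        for hazy, clear, t_true in data:
            # 1) Discriminator step: real haze-free images vs. dehazed outputs
            t_pred, dehazed = generator(hazy)
            discriminator.update(real=clear, fake=dehazed)
            # 2) Generator step: adversarial + L1(transmittance) + L1(dehazed)
            generator.update(hazy, t_true, clear, discriminator)

def dehaze(generator, hazy):
    """Inference: one forward propagation yields the dehazed image."""
    _, dehazed = generator(hazy)
    return dehazed
```

The key property is that inference needs no iterative optimization: dehazing a new image is a single generator forward pass.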

Description

technical field [0001] The invention relates to the fields of image processing and deep learning, in particular to an image defogging method based on a deep neural network. Background technique [0002] Under bad weather conditions, images taken outdoors are often significantly degraded by particles suspended in the air, causing problems such as reduced contrast and color distortion. Light is scattered by fog, haze, and dust in the air, so it is scattered light that finally reaches the camera. A hazy image is usually composed of a directly attenuated component and scattered atmospheric light: the direct attenuation is the light intensity received by the camera after reflection from the object surface and attenuation along the path, and the scattered atmospheric light is ambient light scattered toward the camera. Image defogging algorithms have gradually become a research hotspot in military, aerospace, transportation and...
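The two-component degradation described here is the standard atmospheric scattering model; in conventional notation (the symbol choices are ours, not the patent's) it reads:

I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}

where I is the observed hazy image, J the haze-free scene radiance, A the global atmospheric light, t the transmittance, β the atmospheric scattering coefficient, and d the depth of field. The first term is the direct attenuation; the second is the scattered atmospheric light.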


Application Information

IPC (IPC8): G06T7/514; G06T5/00; G06N3/08; G06N3/04
CPC: G06N3/08; G06T7/514; G06N3/048; G06N3/045; G06T5/00
Inventors: 李岳楠, 刘宇航
Owner: TIANJIN UNIV