Image defogging method based on multi-scale residual learning

A multi-scale residual learning technique in the fields of image processing and deep learning. It addresses the inaccuracy of existing algorithms, which degrades the quality of the dehazed image, and achieves a good dehazing effect with a method that is simple and easy to implement.

Pending Publication Date: 2019-12-13
TIANJIN UNIV


Problems solved by technology

[0006] Existing image defogging techniques generally estimate the image transmission rate and global atmospheric light, and use the atmospheric scattering model to recover the fog-free image; inaccurate estimates of these quantities degrade the quality of the dehazed image.



Examples


Embodiment 1

[0039] To achieve realistic image defogging, an embodiment of the present invention proposes an image defogging method based on multi-scale residual learning; see Figure 1 and the description below:

[0040] 101: Based on the atmospheric scattering model, use a known depth of field and randomly selected global atmospheric light and atmospheric scattering coefficient to generate foggy images from fog-free images, and establish a training set;
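This synthesis step follows the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A minimal NumPy sketch is given below; the sampling ranges for A and β are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def synthesize_hazy(clear, depth, rng=None):
    """Generate a hazy image from a clear image and its depth map using
    the atmospheric scattering model:
        I = J * t + A * (1 - t),  with  t = exp(-beta * d)
    clear: HxWx3 float array in [0, 1]; depth: HxW float array (depth of field).
    """
    rng = rng or np.random.default_rng()
    A = rng.uniform(0.7, 1.0)             # global atmospheric light (assumed range)
    beta = rng.uniform(0.6, 1.8)          # atmospheric scattering coefficient (assumed range)
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over channels
    return clear * t + A * (1.0 - t)

# Usage: build one (foggy, fog-free) training pair from a toy image and depth map.
J = np.random.default_rng(0).random((8, 8, 3))      # fog-free image
d = np.linspace(0.0, 2.0, 64).reshape(8, 8)         # known depth of field
I = synthesize_hazy(J, d, rng=np.random.default_rng(1))
```

Repeating this with freshly sampled A and β for each clear image yields the training set of (foggy, fog-free) pairs.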

[0041] 102: Construct a dehazing network based on an encoder and a decoder. The network is built from multi-scale residual learning modules, which use receptive fields and feature maps of different scales to extract fog-related features;

[0042] 103: Train the dehazing network with a linear combination of the L1 norm loss function, the perceptual loss function and the mean loss function;
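A sketch of the combined objective is below. The combination weights are assumptions, the perceptual loss would normally compare activations of a pretrained CNN (here replaced by a stand-in gradient-based feature extractor), and "mean loss" is read here as a penalty on the difference of global means; the patent may define it differently:

```python
import numpy as np

def l1_loss(pred, target):
    """Pixel-wise L1 norm loss."""
    return np.mean(np.abs(pred - target))

def mean_loss(pred, target):
    """Penalizes a shift in global brightness (one reading of 'mean loss')."""
    return abs(pred.mean() - target.mean())

def perceptual_loss(pred, target, feat):
    """Distance in a feature space; a pretrained CNN would normally supply `feat`."""
    return np.mean((feat(pred) - feat(target)) ** 2)

def total_loss(pred, target, feat, w=(1.0, 0.5, 0.1)):
    """Linear combination of L1, perceptual, and mean losses (weights assumed)."""
    return (w[0] * l1_loss(pred, target)
            + w[1] * perceptual_loss(pred, target, feat)
            + w[2] * mean_loss(pred, target))

# Stand-in "feature extractor": image gradients instead of CNN activations.
grad_feat = lambda img: np.stack(np.gradient(img))

# Usage: evaluate the objective on a toy prediction/target pair.
rng = np.random.default_rng(0)
pred, target = rng.random((4, 4)), rng.random((4, 4))
loss = total_loss(pred, target, grad_feat)
```

During training, this scalar would be minimized over the network parameters by backpropagation in a deep-learning framework.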

[0043] 104: Using the trained model parameters, input a foggy image to obtain the defogged image.

Embodiment 2

[0074] The scheme of Embodiment 1 is described in detail below in conjunction with specific drawings and calculation formulas:

[0075] 201: Based on the atmospheric scattering model, use a known depth of field and randomly selected global atmospheric light and atmospheric scattering coefficient to generate foggy images from fog-free images, and establish a training set;

[0076] 202: Construct a dehazing network based on an encoder and a decoder. The network is built mainly from multi-scale residual learning modules, which use receptive fields and feature maps of different scales to extract fog-related features;

[0077] 203: Train the dehazing network with a linear combination of the L1 norm loss function, the perceptual loss function and the mean loss function;

[0078] 204: Using the trained model parameters, input a foggy image to obtain the dehazed image.

[0079] Wherein, the sp...

Embodiment 3

[0096] The feasibility of the schemes in Embodiments 1 and 2 is verified with experimental data below:

[0097] Two outdoor fog-free images were selected from the Internet and fogged using step 201 of Embodiment 2, yielding two foggy images; one additional foggy image of a real outdoor scene was also selected. These three images were defogged with the method of the present invention. Figures 4, 5 and 6 each show a foggy image and its dehazed result, respectively. The results show that this method dehazes images effectively.



Abstract

The invention discloses an image defogging method based on multi-scale residual learning. The method comprises the steps of: based on an atmospheric scattering model, randomly selecting global atmospheric light and atmospheric scattering coefficients with a known depth of field, generating foggy images from fogless images, and building a training set; constructing a defogging network based on an encoder and a decoder, building the network structure with multi-scale residual learning modules, which extract fog-related features using receptive fields and feature maps of different scales; training the defogging network with a linear combination of an L1 norm loss function, a perceptual loss function and a mean loss function; and inputting a foggy image into the trained model to obtain a defogged image. No complex hypotheses or priors are needed, the fogless image can be recovered directly from one foggy image, and the method is simple and easy to implement.

Description

technical field

[0001] The invention relates to the fields of image processing technology and deep learning technology, in particular to an image defogging method based on multi-scale residual learning.

Background technique

[0002] Haze is a common atmospheric phenomenon. In hazy weather, light is scattered by particles suspended in the air (such as smoke particles, dust and water droplets). These particles scatter both the light reflected from the actual scene and the atmospheric ambient light, causing adverse effects on imaging such as low contrast and color distortion. Haze not only degrades the visual quality of images, but also hinders the understanding and analysis performed by computer vision systems. With the development of computer vision, its applications now span many fields such as traffic monitoring, target recognition, target tracking and image classification. Computer vision systems have high require...


Application Information

IPC(8): G06T5/00; G06N3/08; G06N3/02
CPC: G06T5/003; G06N3/08; G06N3/02; G06T2207/20081; G06T2207/20084; Y02A90/10
Inventors: 李岳楠, 刘宇航
Owner: TIANJIN UNIV