
Low-illumination image enhancement method based on improved depth separable generative adversarial network

A low-illumination image enhancement technology, applied in the fields of image enhancement, biological neural network models, and image analysis. It addresses problems such as weak robustness, insufficient memory, and high computational complexity and time consumption, with the effect of reducing computational complexity, reducing the number of model parameters, and increasing computational efficiency.

Pending Publication Date: 2020-11-10
HUBEI UNIV OF TECH +2


Problems solved by technology

[0007] (1) Existing CNN-based low-illumination enhancement models suffer from overly complex computation and insufficient memory.
[0008] (2) Existing low-light image enhancement techniques use too many model parameters and have low computational efficiency.
[0009] (3) Existing algorithms with small parameter models and relatively low complexity achieve low accuracy.
[0010] (4) Pictures enhanced by existing methods have poor quality and detail; the methods are not very robust, have difficulty adapting to low-light images captured under different lighting environments, and incur large computational complexity and time consumption.
[0011] The difficulty in solving the above problems and defects is as follows: because CNN-based low-illumination enhancement models target the specialized domain of low-illumination pictures, their computation is complex, and improving accuracy does not make a model as compact or as fast as simpler models; moreover, because memory is limited, a model cannot improve accuracy and efficiency simultaneously merely by increasing its scale.




Embodiment

[0084] (1) Generative adversarial network

[0085] The new framework for estimating a generative model via an adversarial process trains two models simultaneously: a generative model G that captures the distribution of the data, and a discriminative model D that estimates the probability that a sample came from the training data.

[0086] The generative adversarial network consists of two parts: the generative model G and the discriminative model D. The generative model learns the distribution of real data. The discriminative model is a binary classifier that distinguishes whether its input is real data or generated data. x denotes real data, following the distribution P_r(x). z is a latent-space variable, following the distribution P_z(z), such as a Gaussian or uniform distribution. A sample is drawn from the latent space z and passed through the generative model G to produce generated data x' = G(z). Then the real data and th...
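The adversarial setup described above can be sketched numerically. A minimal illustration of the standard GAN value function V(D, G) = E_x~Pr[log D(x)] + E_z~Pz[log(1 - D(G(z)))], using fixed toy stand-ins for D and G (these are hypothetical functions for illustration, not the patent's networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    # Toy "discriminator": a sigmoid score, higher for samples near the
    # real-data region (centered at 1.0 in this sketch).
    return 1.0 / (1.0 + np.exp(-(x - 0.5) * 4.0))

def G(z):
    # Toy "generator": maps latent z toward the real-data region.
    return 0.5 + 0.1 * z

x_real = rng.normal(1.0, 0.1, size=1000)   # real data, x ~ P_r(x)
z = rng.normal(0.0, 1.0, size=1000)        # latent variable, z ~ P_z(z) (Gaussian)
x_fake = G(z)                              # generated data x' = G(z)

# V(D, G): D is trained to maximize this; G is trained to minimize it.
V = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(x_fake)))
print(V)
```

In training, D and G would be neural networks updated alternately on this objective; the toy functions here only make the two expectation terms concrete.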



Abstract

The invention belongs to the technical field of image enhancement and discloses a low-illumination image enhancement method based on an improved depth-separable generative adversarial network. The method comprises the steps of: constructing an improved depth-separable convolutional generative adversarial network model; training the constructed model; and performing low-illumination image enhancement using the trained model. The number of model parameters and the computational complexity can be greatly reduced while preserving the low-illumination image enhancement effect, so that the problem of insufficient memory in current research can be addressed. Depthwise separable convolution is introduced and improved, so that the invention remains suited to the low-illumination image enhancement task while model parameters are reduced and computational efficiency is improved. Compared with low-illumination image enhancement algorithms of the same computational complexity and parameter count, the method shows clear superiority in effect.
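The parameter savings claimed for depthwise separable convolution can be checked with simple counting. A depthwise separable convolution factors a standard K×K convolution into a per-channel (depthwise) K×K step plus a 1×1 pointwise step; ignoring biases, a standard layer needs K·K·C_in·C_out weights while the separable version needs K·K·C_in + C_in·C_out. A small sketch (the layer sizes below are illustrative, not taken from the patent):

```python
def conv_params(k, c_in, c_out):
    # Standard KxK convolution: one KxK filter per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise step: one KxK filter per input channel.
    # Pointwise step: a 1x1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 128
std = conv_params(k, c_in, c_out)        # 147456
sep = separable_params(k, c_in, c_out)   # 17536
print(std, sep, std / sep)               # roughly 8.4x fewer parameters
```

For 3×3 kernels the reduction factor approaches 9 as the channel count grows, which is the mechanism behind the reduced model size and memory footprint described in the abstract.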

Description

Technical field
[0001] The invention belongs to the technical field of image enhancement, and in particular relates to a low-illumination image enhancement method based on an improved depth-separable generative adversarial network.
Background technique
[0002] Images carry rich and detailed information about real scenes. By capturing and processing image data, intelligent systems can be developed to perform tasks such as object detection, classification, segmentation, recognition, scene understanding, and 3D reconstruction, which are then used in many practical applications, such as autonomous driving, video surveillance, and virtual/augmented reality.
[0003] However, practical systems depend heavily on the quality of the input image. In particular, they may perform well with high-quality input data but otherwise underperform. A typical case is the use of images captured in poorly lit environments, where pictures often suffer from severe degradation such as poor visibili...


Application Information

IPC(8): G06T5/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06T2207/20081; G06T2207/20084; G06N3/045; G06T5/90; Y02T10/40
Inventors: 王春枝, 严灵毓, 魏明, 谭敏, 叶志伟, 刘爱军, 王早宁, 张文栋, 官沙
Owner: HUBEI UNIV OF TECH