
Image bit enhancement method based on deep learning

A deep learning and image-processing technology, applied in the field of deep learning, which can solve the problems of blurred image details and faint contours, and achieve effects such as avoiding difficult model training, reducing computational complexity, and enhancing bit depth

Inactive Publication Date: 2020-02-28
TIANJIN UNIV
Cites 3 · Cited by 11

AI Technical Summary

Problems solved by technology

[0006] At present, many algorithms for image bit depth enhancement based on simple calculations have been proposed, but most of them cannot handle image false contours well. Some interpolation-based algorithms can largely eliminate false contours, but they generally blur image details and faint contours, especially in the LMM (Local Maximum/Minimum) region.



Examples


Embodiment 1

[0032] The embodiment of the present invention proposes an image bit enhancement method based on deep learning, in which the model is trained by minimizing a perceptual loss function through gradient descent [7]. The method includes the following steps:

[0033] 101: Preprocess the high-bit, lossless-quality images in the Sintel database and quantize them into low-bit images;

[0034] Among them, the Sintel database comes from a lossless animated short film; sample images are shown in Figure 2. The images are preprocessed and used to train the convolutional neural network.

[0035] 102: Design a convolutional neural network based on deep learning, take the quantized low-bit image as input, and use the perceptual loss between the output result and the original high-bit image as a loss function;

[0036] Among them, as shown in Figure 1, the convolutional neural network uses transposed convolution (Transposed Convolutional Layer) [8], and borr...
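As an illustration of step 102, the following is a minimal PyTorch sketch of a perceptual loss computed on pretrained VGG-16 features, which is one common realization of the perceptual loss cited above [7]; the chosen feature layer, the input normalization, and the class name are assumptions for illustration, not the patent's exact formulation.

```python
import torch.nn as nn
from torchvision.models import vgg16


class PerceptualLoss(nn.Module):
    """Perceptual loss: MSE between VGG-16 feature maps of two images (illustrative)."""

    def __init__(self, layer_index=16):
        super().__init__()
        # frozen feature extractor up to an intermediate conv layer;
        # inputs are assumed to be 3-channel tensors scaled/normalized
        # the way the pretrained VGG weights expect
        self.features = vgg16(weights="DEFAULT").features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()

    def forward(self, output, target):
        # compare the network output and the original high-bit image
        # in feature space rather than pixel space
        return self.mse(self.features(output), self.features(target))
```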

Embodiment 2

[0041] The scheme in Embodiment 1 is further described below; see the following description for details:

[0042] 201: The Sintel database comes from a lossless animated short film; the images are preprocessed and used to train the convolutional neural network;

[0043] Among them, the Sintel database contains 21,312 frames of 16-bit pictures, each of size 436 × 1024. The image content covers a variety of scenes, including snow mountains, sky, towns, caves, etc. To reduce memory usage during training, 1000 images are randomly sampled from the database, cut into 96 × 96 patches, and stored as numpy arrays. During training, each 16-bit image is quantized to 4 bits and fed to the convolutional neural network.
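A minimal numpy sketch of this preprocessing step is shown below; the bit-shift quantization, file handling, and function names are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np
import imageio

PATCH = 96                 # patch size used in the embodiment
HIGH_BITS, LOW_BITS = 16, 4


def quantize(img16, low_bits=LOW_BITS, high_bits=HIGH_BITS):
    """Drop least-significant bits: one simple way to quantize 16-bit data to 4 bits."""
    return (img16 >> (high_bits - low_bits)).astype(np.uint16)


def random_patch(img, size=PATCH):
    """Cut one random size x size patch out of an H x W (x C) frame."""
    h, w = img.shape[:2]
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return img[y:y + size, x:x + size]


def build_dataset(frame_paths, out_prefix="sintel_patches"):
    """Store (low-bit, high-bit) patch pairs as numpy arrays."""
    lows, highs = [], []
    for path in frame_paths:          # e.g. 1000 randomly sampled 436 x 1024 frames
        frame = imageio.imread(path)  # 16-bit frame
        hi = random_patch(frame)
        lows.append(quantize(hi))
        highs.append(hi)
    np.save(out_prefix + "_low.npy", np.stack(lows))
    np.save(out_prefix + "_high.npy", np.stack(highs))
```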

[0044] 202: The convolutional neural network uses transposed convolution and adds skip connections (Skip Connections) between the transposed convolutional layers, and the ...
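The following is a hedged PyTorch sketch in the spirit of step 202: a network built from stride-1 transposed-convolution layers with skip connections between them. The layer count, channel width, and use of batch normalization are assumptions and not the patent's exact configuration.

```python
import torch.nn as nn


class BitDepthNet(nn.Module):
    """Stride-1 transposed-convolution layers with skip connections (illustrative)."""

    def __init__(self, channels=3, width=64):
        super().__init__()

        def block(cin, cout):
            # a kernel-3, stride-1, padding-1 transposed convolution keeps
            # the spatial size, so the output matches the input resolution
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        self.layer1 = block(channels, width)
        self.layer2 = block(width, width)
        self.layer3 = block(width, width)
        self.out = nn.ConvTranspose2d(width, channels, kernel_size=3, padding=1)

    def forward(self, x):
        f1 = self.layer1(x)
        f2 = self.layer2(f1)
        f3 = self.layer3(f2) + f1   # skip connection between transposed-conv layers
        return self.out(f3) + x     # global skip: predict a residual over the input
```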

Embodiment 3

[0052] The evaluation metrics and related bit depth enhancement algorithms, both domestic and international, are introduced in detail below, and the effects of the schemes in Embodiments 1 and 2 are evaluated. See the description below for details:

[0053] 1000 images randomly selected from the Sintel database are used to train the model. To ensure the accuracy of the test results, images are randomly selected from the remaining image set as the test set to evaluate the experimental results.

[0054] This method uses two evaluation metrics to evaluate the generated high-bit images:

[0055] Peak Signal to Noise Ratio (PSNR): PSNR is the most common and widely used objective metric for evaluating the similarity between two images. It is based on the differences between corresponding pixels, i.e., it is an error-sensitivity-based image quality measure. Since it does not take the visual characteristics of the human eye into account, the objective eva...
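For reference, PSNR can be computed as follows; the numpy-based implementation and the default peak value for 16-bit images are standard choices, not values taken from the patent text.

```python
import numpy as np


def psnr(reference, estimate, peak=2 ** 16 - 1):
    """PSNR in dB between two images stored as numpy arrays with the given peak value."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```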



Abstract

The invention discloses an image bit depth enhancement method based on a deep learning network. The method comprises the following steps: preprocessing a high-bit, lossless-quality image and quantizing it into a low-bit image; designing a convolutional neural network based on deep learning, taking the quantized low-bit image as input and the perceptual loss between the output result and the original high-bit image as the loss function; training the parameters of each convolution layer and each batch normalization layer in the model by using an optimizer to perform gradient descent on the loss function, and storing the model and the parameters of all its layers once the decrease of the model's loss function no longer exceeds a threshold value; and, after a high-bit image has been quantized to low bits, recovering the high-bit image through the stored convolutional neural network. By exploiting a convolutional neural network framework from deep learning, the method can accurately recover a high-quality high-bit image.
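A hedged sketch of the training procedure summarized in the abstract is shown below: an optimizer performs gradient descent on the loss, and the model is saved once the per-epoch decrease of the loss no longer exceeds a threshold. The choice of Adam, the learning rate, the threshold value, and the file name are assumptions for illustration.

```python
import torch


def train(model, loss_fn, loader, lr=1e-4, threshold=1e-4, max_epochs=100):
    """Gradient descent on the loss; stop once the per-epoch decrease falls below a threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    previous = float("inf")
    for epoch in range(max_epochs):
        total = 0.0
        for low, high in loader:              # (low-bit input, high-bit target) batches
            optimizer.zero_grad()
            loss = loss_fn(model(low), high)  # e.g. the perceptual loss
            loss.backward()
            optimizer.step()
            total += loss.item()
        average = total / len(loader)
        if previous - average < threshold:    # loss decrease no longer exceeds the threshold
            break
        previous = average
    # store the model and the parameters of all its layers
    torch.save(model.state_dict(), "bit_depth_model.pt")
    return model
```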

Description

Technical field

[0001] The invention relates to the field of deep learning, and in particular to an image bit enhancement method based on deep learning.

Background technique

[0002] With the development of science and technology, as people's material and cultural standards keep rising, so do their requirements for the visual quality provided by displays. People increasingly expect displays to provide pictures with higher definition and colors closer to the real scene. Under such circumstances, high-definition displays and HDR (High Dynamic Range) displays have gradually gained a larger market share thanks to their excellent visual experience.

[0003] However, most existing picture and video data are shot and stored in low-bit form. Each color channel of each pixel in most images and videos is stored with 8 bits, so each color channel can represent at most 256 values. Some network cameras even use 5, 6, and 5 bi...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T5/00; G06N3/04; G06N3/08
CPC: G06T2207/10016; G06T2207/10024; G06T2207/20081; G06T2207/20084; G06N3/08; G06N3/045; G06T5/73
Inventor: 苏育挺, 孙婉宁, 刘婧
Owner: TIANJIN UNIV