High dynamic range image tone mapping method and system based on deep learning

A high-dynamic-range tone-mapping technology, applied in the field of computer vision, which addresses the problem of heavy loss of image detail and local contrast and achieves the effect of solving the boundary problem between small regions

Active Publication Date: 2019-09-03
SHENZHEN UNIV

AI Technical Summary

Problems solved by technology

This type of method is simple and fast to compute, but it tends to lose a large amount of image detail and local contrast.

Method used


Examples


Specific Embodiment 1

[0130] The first embodiment is applied to high dynamic range image tone mapping.

[0131] ① Set up the tone mapping network. For the network structure, see the part from Input to Output in the framework of Figure 1; the input is single-channel data, and the output is also single-channel data.

[0132] ② Set up networks to compute image features. VGGNet is used to compute deep perceptual features of the image, and LHN is used to compute image histogram features.

[0133] (a) The input of VGGNet is 3-channel data, while the output of the tone mapping network has only 1 channel, so the output is stacked three times along the channel dimension to form the 3-channel input of VGGNet.
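The channel stacking in step (a) can be sketched as follows; the (C, H, W) layout is an assumption, since the patent does not specify a tensor layout:

```python
import numpy as np

# The tone mapping network emits a single-channel image, but VGGNet
# expects 3 channels, so the channel is repeated along the channel axis.
def to_three_channels(single_channel):
    assert single_channel.shape[0] == 1
    return np.repeat(single_channel, 3, axis=0)

stacked = to_three_channels(np.zeros((1, 224, 224)))
print(stacked.shape)  # (3, 224, 224)
```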

[0134] (b) The input of LHN is 1-channel data. The output of the tone mapping network is evenly divided into 15×15 = 225 small regions, and the length and width of each small region are respectively 1/15 of the length and width of the output of the tone mapping network...
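The region split in step (b) can be sketched as below; the helper name and the assumption that the output dimensions divide evenly by 15 are ours, not the patent's:

```python
import numpy as np

# Divide the network output into a 15x15 grid of equal regions
# (225 in total), each 1/15 of the output's height and width,
# before the regions are fed to the LHN.
def split_into_regions(image, grid=15):
    h, w = image.shape
    assert h % grid == 0 and w % grid == 0
    rh, rw = h // grid, w // grid
    return [image[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            for i in range(grid) for j in range(grid)]

regions = split_into_regions(np.zeros((225, 300)))
print(len(regions), regions[0].shape)  # 225 (15, 20)
```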

Specific Embodiment 2

[0156] The second embodiment is applied to image enhancement of low-light ordinary images and does not involve ground truth.

[0157] The difference from the first embodiment is that in ②, the output of the tone mapping network is not evenly divided into multiple small regions but is used directly as the input of the LHN. All other operations are the same as in Embodiment 1.

[0158] Also unlike Embodiment 1, color compensation is performed in ⑥: the Cb and Cr channels of the original low-light image are combined with the output of the tone mapping network to form a complete three-channel YCbCr image, which is then converted back to the RGB color space to obtain the final result.
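The color compensation in ⑥ can be sketched as follows. The enhanced luma replaces the Y channel, the untouched Cb and Cr channels are re-attached, and the result is converted back to RGB; the full-range BT.601 conversion coefficients are an assumption, since the patent does not name a YCbCr variant:

```python
import numpy as np

# Recombine enhanced luma with the original chroma and convert
# YCbCr (values in [0, 1], chroma centered at 0.5) back to RGB.
def recombine_ycbcr(y_enhanced, cb, cr):
    r = y_enhanced + 1.402 * (cr - 0.5)
    g = y_enhanced - 0.344136 * (cb - 0.5) - 0.714136 * (cr - 0.5)
    b = y_enhanced + 1.772 * (cb - 0.5)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Neutral chroma (0.5) yields a grayscale image equal to the luma.
rgb = recombine_ycbcr(np.full((4, 4), 0.6), np.full((4, 4), 0.5), np.full((4, 4), 0.5))
print(rgb.shape)  # (4, 4, 3)
```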

Specific Embodiment 3

[0160] The third embodiment is applied to image enhancement of low-light ordinary images, and involves ground truth.

[0161] ① Set up the image enhancement network. It suffices to modify the tone mapping network of the first embodiment to have a 3-channel input and a 3-channel output.

[0162] ② Set up networks to compute image features. VGGNet is used to compute deep perceptual features of the image, and LHN is used to compute image histogram features. The output of the image enhancement network is used directly as the input of VGGNet, and each channel of the output is fed to a different LHN.

[0163] ③Set the calculation method of the loss function.

[0164] L_VGG = ||T_VGG(O) − T_VGG(I)||_2

[0165] L_Histogram = ||T_LHN(O_R) − GTH_R||_1 + ||T_LHN(O_G) − GTH_G||_1 + ||T_LHN(O_B) − GTH_B||_1

[0166] L_total = L_VGG + L_Histogram

[0167] where L_Histogram represents the histogram feature loss...
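The total loss above can be sketched as below: an L2 term between VGG features of the output O and the input I, plus per-channel L1 terms between LHN histogram features of O and the ground-truth histograms GTH. The `t_vgg` and `t_lhn` callables below stand in for the real feature networks and are assumptions, not the patent's implementation:

```python
import numpy as np

# L_total = L_VGG + L_Histogram, following equations [0164]-[0166].
def total_loss(t_vgg, t_lhn, O, I, gth):
    l_vgg = np.linalg.norm(t_vgg(O) - t_vgg(I))         # L2 norm
    l_hist = sum(np.abs(t_lhn(O[c]) - gth[c]).sum()     # per-channel L1
                 for c in range(3))
    return l_vgg + l_hist

# Stand-in feature extractors (hypothetical): identity features for
# t_vgg, a normalized 16-bin histogram for t_lhn.
t_vgg = lambda x: x.ravel()
t_lhn = lambda ch: np.histogram(ch, bins=16, range=(0, 1))[0] / ch.size

# With O == I and ground-truth histograms matching O, the loss is zero.
O = np.random.rand(3, 8, 8)
gth = [t_lhn(O[c]) for c in range(3)]
print(total_loss(t_vgg, t_lhn, O, O, gth))  # 0.0
```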



Abstract

The invention discloses a deep-learning-based high dynamic range image tone mapping method and system. The method comprises the following steps: constructing a tone mapping network framework; preprocessing an input high-dynamic-range image and calculating a total loss function from a global perceptual feature loss function and a local histogram feature loss function; training the tone mapping network framework according to the total loss function; and stopping training when the result converges to obtain the output of the tone mapping network. With the neural network framework of the invention, two histogram-based cost functions can be optimized and tone mapping can be realized end to end; the boundary problem between small regions is solved, and a high-quality low-dynamic-range image is obtained directly.

Description

technical field [0001] The present invention relates to the technical field of computer vision, and in particular to a deep-learning-based high dynamic range image tone mapping method and system. Background technique [0002] Directly capturing clear high dynamic range images under complex lighting conditions is still very challenging. The current mainstream approach is to take multiple photos at different exposures and then fuse them computationally to obtain a high dynamic range image. However, traditional display devices such as TV, computer, and mobile phone screens can only display low dynamic range images, that is, images whose dynamic range is less than or equal to 256. Therefore, a high dynamic range image needs to be mapped to a low dynamic range image by a tone mapping method before it can be displayed. Tone mapping methods are therefore a key research object, and can be roughly divided into two categories: global methods and local methods...

Claims


Application Information

IPC(8): G06T5/00, G06T5/40, G06N3/08
CPC: G06T5/008, G06T5/40, G06N3/08, G06T2207/20081, G06T2207/20084, G06T2207/20208
Inventor: 廖广森, 罗鸿铭, 侯贤旭, 邱国平
Owner SHENZHEN UNIV