An image coloring method based on a self-attention generative adversarial network

An image coloring and attention technology, applied to biological neural network models, image analysis, image data processing and related fields, which can solve problems such as color blur and color overflow and achieves a good coloring effect.

Active Publication Date: 2019-05-03
福建帝视信息科技有限公司
AI Technical Summary

Problems solved by technology

However, the more common method is to use a pixel-by-pixel L1 or L2 norm to calculate the difference between the reconstructed...
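For reference, the pixel-wise L1/L2 comparison mentioned above amounts to a simple per-pixel difference between the reconstructed image and the original; the sketch below uses placeholder tensors and is not code from the patent.

```python
# Minimal sketch of pixel-wise L1/L2 losses; tensor names are illustrative placeholders.
import torch

reconstructed = torch.rand(1, 3, 512, 512)   # generator output (placeholder)
original = torch.rand(1, 3, 512, 512)        # ground-truth color image (placeholder)

l1_loss = torch.mean(torch.abs(reconstructed - original))   # pixel-wise L1 norm
l2_loss = torch.mean((reconstructed - original) ** 2)       # pixel-wise L2 (MSE) norm
```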



Embodiment Construction

[0044] As shown in Figures 1-5, the present invention discloses an image coloring method based on a self-attention generative adversarial network, which includes the following steps:

[0045] Step 1: To train the grayscale image coloring model, select the Konachan high-definition anime image data set and randomly crop the original 2K or 4K resolution images to obtain color original images. Each color original image is then rotated and mirrored, and the corresponding grayscale image is obtained by converting RGB to grayscale. The grayscale image I_G and the color original image I_C are cut into sub-images of 1×512×512 and 3×512×512 respectively and normalized so that the pixel values are mapped into the [-1, 1] interval, yielding the training data set.
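A minimal data-preparation sketch for Step 1 is given below, assuming PIL and NumPy; the helper name, file path and augmentation probabilities are illustrative assumptions, not the patent's actual preprocessing code.

```python
# Sketch of Step 1: random 512x512 crop, rotation/mirroring augmentation,
# RGB-to-grayscale conversion and normalization to [-1, 1].
import numpy as np
from PIL import Image

CROP = 512  # sub-image side length from the patent (1x512x512 / 3x512x512)

def make_training_pair(path, rng=np.random.default_rng()):
    """Return a (grayscale, color) pair of sub-images normalized to [-1, 1]."""
    img = Image.open(path).convert("RGB")          # original 2K/4K color image
    w, h = img.size
    x, y = rng.integers(0, w - CROP), rng.integers(0, h - CROP)
    patch = img.crop((x, y, x + CROP, y + CROP))   # random 512x512 crop

    # Rotation / mirroring augmentation, as described in Step 1
    if rng.random() < 0.5:
        patch = patch.transpose(Image.FLIP_LEFT_RIGHT)
    patch = patch.rotate(90 * int(rng.integers(0, 4)))

    color = np.asarray(patch, dtype=np.float32)                    # H x W x 3
    gray = np.asarray(patch.convert("L"), dtype=np.float32)[None]  # 1 x H x W

    # Map pixel values from [0, 255] to [-1, 1]
    to_unit = lambda a: a / 127.5 - 1.0
    return to_unit(gray), to_unit(color.transpose(2, 0, 1))        # (1,512,512), (3,512,512)
```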

[0046] Step 2: Expand the grayscale images in the training data set to three channels, consistent with the expected colo...
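As a sketch of this step, and assuming that the expansion means replicating the single gray channel three times so the input matches the 3×512×512 color target, a simple channel repeat suffices (PyTorch used here only for illustration):

```python
# Sketch of the channel expansion in Step 2 (assumption: the 1-channel grayscale
# sub-image is replicated to 3 channels to match the color output shape).
import torch

gray = torch.randn(1, 1, 512, 512)    # batch of normalized grayscale sub-images
gray_3ch = gray.repeat(1, 3, 1, 1)    # 1x512x512 -> 3x512x512 per sample
assert gray_3ch.shape == (1, 3, 512, 512)
```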



Abstract

The invention discloses an image coloring method based on a self-attention generative adversarial network. The method comprises the following steps: step 1, training a grayscale image coloring model; step 2, inputting the grayscale images in the training data set into the adversarial network and executing a feature extraction stage, a feature fusion stage, a deconvolution calculation stage and a self-attention learning stage to reconstruct the corresponding color images; step 3, comparing the color image reconstructed after self-attention learning with the corresponding original color image and calculating a penalty function shown in the specification; step 4, calculating a loss function according to the formula shown in the specification and taking it as the optimization loss of the GAN; and step 5, dividing the training process into a plurality of preset sub-training periods and training them in sequence with a step-by-step growth strategy to obtain the generator network. By using an adversarial generative network to reconstruct, from a black-and-white or grayscale image, a color image that conforms to human subjective visual preferences, the method makes the colored image more realistic.
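The abstract names a self-attention learning stage inside the generator but the text available here does not give its exact form. A SAGAN-style self-attention block is the usual construction for such a stage, so the sketch below is an assumption about its shape (the channel-reduction factor, 1×1 convolutions and learned residual weight gamma are illustrative choices), not the patent's layer definition.

```python
# Minimal SAGAN-style self-attention block, sketched as one plausible form of the
# "self-attention learning stage" named in the abstract.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # B x HW x C'
        k = self.key(x).flatten(2)                         # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                # B x HW x HW attention map
        v = self.value(x).flatten(2)                       # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)  # attend over all positions
        return self.gamma * out + x                        # residual connection
```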

Description

Technical Field

[0001] The invention relates to the field of image coloring and enhancement, and in particular to an image coloring method based on a self-attention generative adversarial network.

Background

[0002] Image coloring is a basic means of image enhancement that aims to supplement color information for grayscale images without any color hints, so as to obtain a more complete and pleasing visual experience. With the development of the times, color images and videos have become a common experience for ordinary consumers, and they are far more vivid than the early pictures and video materials that carried only black-and-white or grayscale information. However, because of the missing color information, old black-and-white or grayscale material is extremely difficult to restore to color material that meets the expectations of modern audiences. Furthermore, with the leap in hardware technology and the pursuit of a better visual experience, people's...


Application Information

IPC(8): G06T7/90; G06T11/00; G06T7/11; G06T3/60; G06N3/08; G06N3/04
Inventor: 薛雨阳, 李根, 童同, 高钦泉
Owner 福建帝视信息科技有限公司