Image color conversion device, method and storage medium based on generative adversarial network

A picture color transformation technology, applied to biological neural network models, graphics and image conversion, image enhancement, etc. It addresses the problem of poor detection performance when defect samples are scarce, requires little training data, offers good practicability, and reduces development labor and time costs.

Active Publication Date: 2022-02-15
ZHEJIANG LINYAN PRECISION TECH CO LTD

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a picture color conversion device, method and storage medium based on a generative adversarial network, aiming to solve the problems in industrial defect detection of scarce defect samples and poor detection performance on parts of the same model but different colors, so that a picture of a defective part can be used to generate pictures of that defective part in different colors.

Method used



Examples


Embodiment 1

[0070] A picture color transformation device based on a generative adversarial network includes a data collection module, a training module and a transformation module. The data collection module is used to collect part image data and form a training data set, with random color noise added to the training pictures in the training data set; the training module is used to train the network model with the training data set to obtain a trained network model; the transformation module is used to input the picture to be converted into the trained network model and output the color-transformed picture.
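As an illustration only, the following Python sketch shows one way the data collection module's color-noise augmentation could be realised; the names add_random_color_noise and DataCollectionModule and the per-channel offset scheme are assumptions, not identifiers taken from the patent.

```python
# A minimal sketch of the data collection module described in [0070].
# All names and the noise scheme are illustrative assumptions.
import numpy as np
from PIL import Image


def add_random_color_noise(img: Image.Image, max_shift: int = 30) -> Image.Image:
    """Perturb each RGB channel by a random offset to augment the training set."""
    arr = np.asarray(img, dtype=np.int16)
    shift = np.random.randint(-max_shift, max_shift + 1, size=(1, 1, 3))
    arr = np.clip(arr + shift, 0, 255).astype(np.uint8)
    return Image.fromarray(arr)


class DataCollectionModule:
    """Collects part images and forms the training data set,
    including a color-noise-augmented copy of each picture."""

    def collect(self, image_paths):
        dataset = []
        for path in image_paths:
            img = Image.open(path).convert("RGB")
            dataset.append(img)                      # original picture
            dataset.append(add_random_color_noise(img))  # noisy copy
        return dataset
```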

[0071] As shown in figure 3, the network model includes a generator and a discriminator. The generator is used to generate pictures that conform to the distribution of the training data; the generated pictures and real pictures are respectively input into the discriminator for training, and the discriminator is used to score the generated pictures, ...
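A minimal adversarial training step consistent with [0071] might look like the sketch below. The binary cross-entropy loss, the generator signature generator(pictures, style_ref) and the optimiser handling are assumptions, since the patent does not specify them.

```python
# Illustrative single GAN training step; loss and signatures are assumptions.
import torch
import torch.nn.functional as F


def train_step(generator, discriminator, real_imgs, style_ref, opt_g, opt_d):
    # Discriminator: score real pictures high and generated pictures low.
    fake_imgs = generator(real_imgs, style_ref).detach()
    d_real = discriminator(real_imgs)
    d_fake = discriminator(fake_imgs)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator so generated pictures match the
    # training-data distribution.
    fake_imgs = generator(real_imgs, style_ref)
    d_fake = discriminator(fake_imgs)
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```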

Embodiment 2

[0074] This embodiment is optimized on the basis of Embodiment 1. As shown in figure 1, the generator blocks of the front-end generation block and the back-end generation block each include several feature generation layers and coloring layers arranged in sequence from front to back. Each feature generation layer includes a convolutional layer and a LeakyReLU activation function layer connected in sequence from front to back; the convolutional layer is used to fuse the input with externally extracted style information, and the coloring layer is used to fuse the input with externally extracted color information. The output of the feature generation layer of the previous generator block is upsampled and used as the input of the feature generation layer of the next generator block; the output of the coloring layer of the last generator block of the back-end generation block is added to the output of its feature generation layer and is output after the upsampling layer to generate...
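The following PyTorch-style sketch illustrates one possible reading of such a generator block. Channel sizes, the modulation-based fusion of style and color information, and all class names are assumptions made for illustration only.

```python
# Illustrative generator block for Embodiment 2; fusion scheme is assumed.
import torch.nn as nn


class FeatureGenerationLayer(nn.Module):
    """Convolution + LeakyReLU; the convolution fuses the input feature map
    with externally extracted style information (here via channel modulation)."""

    def __init__(self, channels, style_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2)
        self.style_proj = nn.Linear(style_dim, channels)  # style -> per-channel scale

    def forward(self, x, style):
        scale = self.style_proj(style).unsqueeze(-1).unsqueeze(-1)
        return self.act(self.conv(x * (1 + scale)))


class ColoringLayer(nn.Module):
    """Fuses the feature map with externally extracted color information
    (e.g. a color-histogram embedding)."""

    def __init__(self, channels, color_dim):
        super().__init__()
        self.color_proj = nn.Linear(color_dim, channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, color):
        shift = self.color_proj(color).unsqueeze(-1).unsqueeze(-1)
        return self.conv(x + shift)


class GeneratorBlock(nn.Module):
    """Feature generation layer followed by a coloring layer; the feature map
    is upsampled before being passed to the next block. In the last block of
    the back-end generation block, the caller adds the colored output to the
    feature output before the final upsampling."""

    def __init__(self, channels, style_dim, color_dim):
        super().__init__()
        self.feat = FeatureGenerationLayer(channels, style_dim)
        self.color = ColoringLayer(channels, color_dim)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x, style, color):
        h = self.feat(x, style)
        colored = self.color(h, color)
        return self.up(h), colored  # upsampled features for next block, colored output
```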

Embodiment 3

[0079] This embodiment is optimized on the basis of Embodiment 1 or 2. As shown in figure 2, the discriminator includes discriminator blocks arranged in series from front to back, each discriminator block includes several residual blocks arranged in series from front to back, and each residual block includes a residual convolution and a convolution block. The residual convolution is used to extract the residual information of the input; the output of the convolution block is added to the output of the residual convolution and input to a downsampling layer to obtain the output of the residual block.

[0080] Further, the convolution block consists of two stacked pairs of a convolutional layer and a LeakyReLU activation layer.
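Under the same assumptions as above (PyTorch, illustrative channel counts, average pooling as the downsampling layer), a residual block matching [0079] and [0080] could be sketched as:

```python
# Illustrative discriminator residual block for Embodiment 3.
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Convolution block: two (conv + LeakyReLU) pairs, per [0080].
        self.conv_block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # Residual convolution: extracts the residual information of the input.
        self.residual_conv = nn.Conv2d(in_ch, out_ch, 1)
        # Downsampling applied to the summed output (assumed to be average pooling).
        self.down = nn.AvgPool2d(2)

    def forward(self, x):
        return self.down(self.conv_block(x) + self.residual_conv(x))
```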

[0081] Other parts of this embodiment are the same as those of Embodiment 1 or 2 above, so details are not repeated here.



Abstract

The invention discloses a picture color conversion device, method and storage medium based on a generative adversarial network. A training data set is collected and used to train a network model to obtain a trained network model; the picture to be converted is input into the trained network model, which outputs the color-transformed picture. The network model includes a generator and a discriminator. The generator includes a front-end generation block and a back-end generation block connected sequentially from front to back. The front-end generation block is used to extract external style information, form a feature map, and input it into the back-end generation block after upsampling; the back-end generation block is used to inject color histogram information into the feature map and output the generated picture after coloring and upsampling. The invention can migrate the color domain of the part structures captured by the sampling device and generate part pictures with the same structure but different colors, which are provided to the defect detection network for subsequent processing, so it has good practicability.
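As a hedged illustration of the color-histogram conditioning mentioned in the abstract, the sketch below extracts a normalised per-channel RGB histogram from a reference picture of the target color, which could then be fed to the back-end generation block; the bin count, normalisation and function name are assumptions.

```python
# Illustrative color-histogram extraction for conditioning the generator.
import numpy as np
from PIL import Image


def color_histogram(img: Image.Image, bins: int = 16) -> np.ndarray:
    """Per-channel RGB histogram, concatenated and normalised to sum to 1."""
    arr = np.asarray(img.convert("RGB"))
    hists = [np.histogram(arr[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(hists).astype(np.float32)
    return hist / hist.sum()
```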

Description

Technical Field

[0001] The invention belongs to the technical field of part defect detection, and in particular relates to a picture color transformation device, method and storage medium based on a generative adversarial network.

Background Art

[0002] In the current domestic industrial defect detection industry, in addition to using traditional graphics methods for defect detection, most competitive companies are trying to use artificial intelligence computer vision methods to improve detection accuracy. Before the artificial intelligence era, the traditional methods used by most companies were often only sensitive to large defects, while small defects were difficult to identify. Such traditional methods are more popular abroad, where large factories produce precise molds, but they are difficult to apply to the rougher molds produced by most small domestic manufacturers. In this environment, many relatively efficient artificial intelligence detectio...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T7/90; G06T3/00; G06N3/04; G06K9/62; G06V10/774; G06V10/82; G06V10/764; G06V10/56
CPC: G06T7/90; G06T3/0012; G06T2207/20081; G06T2207/20084; G06N3/045; G06F18/214
Inventor: 张晓武, 陈斌, 李伟, 顾诚淳
Owner: ZHEJIANG LINYAN PRECISION TECH CO LTD