Method for improving trueness of marine scene simulation picture

A technology relating to scene simulation and realism, applied to neural learning methods, image enhancement, image analysis, etc.; it addresses the problem of sample scarcity.

Active Publication Date: 2021-03-26
HARBIN ENG UNIV
Cites: 12 · Cited by: 2

AI-Extracted Technical Summary

Problems solved by technology

[0003] In view of the above-mentioned prior art, the technical problem to be solved by the present invention is to provide a method for improving the realism of marine scene simulation pictures, which can use simulation ...

Abstract

The invention discloses a method for improving the realism of a marine scene simulation picture. The method comprises the steps of segmenting the simulation picture into foreground and background and carrying out style transfer with both a conventional method and the deep-learning method CycleGAN, so as to convert the simulation picture into a realistic marine picture. For the background, Poisson fusion and color transfer are used: a real sea-surface photo serves as the sub-image and the simulation image as the mother image, Poisson fusion is performed, and Reinhard color transfer is then applied to obtain a realistic background. For the foreground, the CycleGAN algorithm is used: the input of each convolutional layer of the generator is point-multiplied with a mask to extract the foreground part, and the input image is concatenated with the feature map at the last layer to preserve the background information of the original image, so that a complete sea-surface image with a realistic style is generated. Simulation software is used to construct a marine scene; after the simulation picture is obtained, it is converted to the style of a real picture and used for neural network training, thereby alleviating the problem of sample scarcity.

Application Domain

Image enhancement · Image analysis +4

Technology Topic

Computer vision · Ecology +8


Examples

  • Experimental program (1)

Example Embodiment

[0065] Example:
[0066] 1. Prepare the data set
[0067] This method requires three data sets: (1) the marine scene simulation picture data set Train_CG and its labels, where the labels divide each picture into three parts: sky, sea surface, and foreground target; (2) the real marine scene photo data set Train_real and its labels, where the labels divide each photo into foreground and background parts; (3) the sea-surface photo data set Train_sea, which contains no targets.
[0068] 2. Randomly select a sample image from Train_sea and segment it with the region-growing algorithm.
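A minimal sketch of this segmentation step is given below. It uses OpenCV's floodFill as the region-growing mechanism; the seed positions, the colour tolerance, and the file name are illustrative assumptions, not the patent's exact algorithm.

```python
import cv2
import numpy as np

def region_grow(img_bgr, seed, tol=8):
    """Grow a region from seed (x, y) over pixels within tol of the seed colour."""
    h, w = img_bgr.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)      # floodFill mask must be 2 px larger
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(img_bgr, mask, seed, (0, 0, 0),
                  loDiff=(tol, tol, tol), upDiff=(tol, tol, tol), flags=flags)
    return mask[1:-1, 1:-1]                        # 255 where the region grew, 0 elsewhere

sea_photo = cv2.imread("Train_sea/sample.jpg")     # hypothetical path
# grow a sky region from a point near the top and a sea region from a point near the bottom
sky_mask = region_grow(sea_photo, seed=(sea_photo.shape[1] // 2, 10))
sea_mask = region_grow(sea_photo, seed=(sea_photo.shape[1] // 2, sea_photo.shape[0] - 10))
```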
[0069] 3. Detect the sea-sky line (horizon) according to the semantic labels of Train_CG and the segmentation result of Train_sea.
[0070] Randomly select one picture each from Train_CG and Train_sea, sample the contact points between the sea surface and the sky in the two segmentation maps multiple times to obtain a set of sampling points, and remove the noise points. The coordinates of the remaining samples are then sent to a one-dimensional linear classifier for training; the fitted straight line is the detected sea-sky line.
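A minimal sketch of this line-fitting step follows. Using RANSAC to discard noisy samples and taking the lowest sky pixel of each sampled column as a boundary point are illustrative choices; the patent only states that noise points are removed and a one-dimensional linear classifier is fitted.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor, LinearRegression

def fit_horizon(sky_mask, n_samples=200):
    """sky_mask: HxW binary array, >0 for sky. Returns slope k and intercept b of y = k*x + b."""
    h, w = sky_mask.shape
    xs = np.linspace(0, w - 1, n_samples).astype(int)
    # for each sampled column, take the lowest sky pixel as the sea/sky contact point
    ys = np.array([np.where(sky_mask[:, x] > 0)[0].max(initial=0) for x in xs])
    model = RANSACRegressor(LinearRegression())     # robust fit removes outlier samples
    model.fit(xs.reshape(-1, 1), ys)
    k = float(model.estimator_.coef_[0])
    b = float(model.estimator_.intercept_)
    return k, b
```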
[0071] 4. Align the two selected pictures according to the detected sea-sky lines.
[0072] First, according to the slopes of the sea-sky lines detected in the two pictures, rotate the sea-surface picture selected from Train_sea so that the slopes are consistent; then align the sea-sky line positions of the two pictures, adjust the size of the sea-surface picture, and cut off the part that exceeds the simulation image. This yields a simulation picture CG and a sea photo Sea of the same size, with the sea-sky line at the same position.
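The sketch below illustrates one way to carry out this rotate-align-crop step with OpenCV. The function name and the rotate-then-scale-then-shift strategy are illustrative assumptions that follow the description above, not the patent's exact procedure.

```python
import cv2
import numpy as np

def align_by_horizon(sea, k_sea, b_sea, cg_shape, k_cg, b_cg):
    """Rotate/scale/shift the sea photo so its horizon matches the simulation image CG."""
    h_cg, w_cg = cg_shape[:2]
    # 1) rotate the sea photo so the horizon slopes are consistent
    angle = np.degrees(np.arctan(k_sea) - np.arctan(k_cg))
    center = (sea.shape[1] / 2, sea.shape[0] / 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    sea_rot = cv2.warpAffine(sea, M, (sea.shape[1], sea.shape[0]))
    # 2) resize to the simulation width, then shift vertically so the horizons coincide
    scale = w_cg / sea_rot.shape[1]
    sea_rs = cv2.resize(sea_rot, (w_cg, int(sea_rot.shape[0] * scale)))
    dy = int(b_cg - b_sea * scale)
    M_shift = np.float32([[1, 0, 0], [0, 1, dy]])
    # 3) the output canvas has the CG size, so anything beyond it is cropped away
    return cv2.warpAffine(sea_rs, M_shift, (w_cg, h_cg))
```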
[0073] 5. Perform Poisson fusion of the background part of the simulation image CG obtained in step 4 with the sea-surface photo Sea.
[0074] From the label picture of the simulation image, obtain its mask image, in which the foreground part is 0 and the background part is 255. With the simulation image CG as the mother image, the sea-surface photo as the sub-image, and the mask image of the simulation image as the mask, perform Poisson fusion to obtain a simulation image with real sea-surface texture.
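A minimal sketch of this fusion step using OpenCV's seamlessClone (a standard Poisson-blending implementation) follows. The file names are hypothetical, and placing the clone at the centre of the mask's bounding box is an illustrative choice that relies on the two images already being aligned and of equal size.

```python
import cv2
import numpy as np

cg = cv2.imread("Train_CG/cg_0001.png")                                 # simulation image (mother image)
sea = cv2.imread("aligned_sea.png")                                     # aligned real sea photo (sub-image)
bg_mask = cv2.imread("cg_0001_bg_mask.png", cv2.IMREAD_GRAYSCALE)       # background = 255, foreground = 0

# centre of the background mask's bounding box, so the sub-image lands in place
ys, xs = np.where(bg_mask > 0)
center = (int((xs.min() + xs.max()) / 2), int((ys.min() + ys.max()) / 2))

fused = cv2.seamlessClone(sea, cg, bg_mask, center, cv2.NORMAL_CLONE)   # Poisson fusion
cv2.imwrite("cg_0001_fused.png", fused)
```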
[0075] 6. Change the color style of the result picture obtained in step 5 with the Reinhard color transfer algorithm.
[0076] Convert the result picture from step 5 and the sea-surface photo Sea to the Lab color space respectively, and compute the mean and variance of each of the three channels. For each channel of the source image, apply the formula shown below, then convert the result back into the RGB color space to obtain the color-transferred picture.
[0077] p′ = (d_2 / d_1) · (p − m_1) + m_2
[0078] where p is the source image, m_1 is the mean of the source image, m_2 is the mean of the target image, d_1 is the variance of the source image, and d_2 is the variance of the target image.
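A minimal sketch of this color transfer in Python follows. It assumes OpenCV's Lab conversion and uses the per-channel standard deviation as the spread measure d, which is the usual Reinhard formulation; the text above says "variance", so this is a stated assumption.

```python
import cv2
import numpy as np

def reinhard_transfer(src_bgr, tgt_bgr):
    """Move the per-channel Lab statistics of tgt onto src: p' = (d2/d1)(p - m1) + m2."""
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    m1, d1 = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    m2, d2 = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (src - m1) * (d2 / d1) + m2
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# usage: transfer the colour statistics of the real sea photo onto the fused picture
# result = reinhard_transfer(fused, sea)
```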
[0079] 7. Repeat steps 2-6 to perform Poisson fusion and color transfer on the backgrounds of all images in the simulation picture data set Train_CG, obtaining a new data set Train_cg.
[0080] 8. Pass the foregrounds of the data sets Train_cg and Train_real through the CycleGAN algorithm to transfer the style of the foreground part.
[0081] The CycleGAN algorithm is an unpaired image-to-image translation algorithm based on deep learning. Pictures from two data sets of different styles are fed into the network for training at the same time, realizing the conversion from one style to the other. In essence it consists of two mirror-symmetric GANs (generative adversarial networks) forming a ring network with two generators G_AB, G_BA and two discriminators D_A, D_B. In the present invention the generator structure, shown in Figure 3, uses a residual network; the input of each convolutional layer is point-multiplied by a mask of the corresponding size, so that only the foreground part is generated, and the network input is concatenated with the feature map before the last convolutional layer to preserve the information of the background part, so that the network can output a complete translated picture. A complete CycleGAN can be split into two identical unidirectional networks that are each other's reverse processes.
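The PyTorch sketch below illustrates the masked-generator idea: every convolution sees its input multiplied by a mask resized to the current feature-map size, and the original input is concatenated back before the last convolution so background information survives. Layer counts, channel widths, and class names are illustrative assumptions and do not reproduce the exact structure of Figure 3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv(nn.Module):
    """Convolution whose input is point-multiplied by the (resized) foreground mask."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2)
    def forward(self, x, mask):
        m = F.interpolate(mask, size=x.shape[-2:], mode="nearest")
        return F.relu(self.conv(x * m))              # only the foreground is processed

class MaskedGenerator(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.enc1 = MaskedConv(3, c)
        self.enc2 = MaskedConv(c, c * 2, s=2)
        self.res  = MaskedConv(c * 2, c * 2)         # stands in for a stack of residual blocks
        self.up   = nn.ConvTranspose2d(c * 2, c, 4, 2, 1)
        self.out  = nn.Conv2d(c + 3, 3, 3, 1, 1)     # input image concatenated before the last conv

    def forward(self, img, mask):
        x = self.enc1(img, mask)
        x = self.enc2(x, mask)
        x = x + self.res(x, mask)                    # residual connection
        x = F.relu(self.up(x))
        x = torch.cat([x, img], dim=1)               # keep original background information
        return torch.tanh(self.out(x))
```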
[0082] For the unidirectional process that generates the real-photo style from the simulation picture, training is divided into two stages: discriminator training and generator training. In the discriminator training stage, the parameters of the generator G_AB are fixed. A picture a is drawn from the data set Train_cg and its mask map is obtained from its label, with the foreground part set to 1 and the background part set to 0. Point-multiplying the picture by its mask extracts the foreground and sets the background to black; the result is sent to the generator G_AB to produce a fake real-photo-style picture fake_b. Both fake_b and a photo b drawn from Train_real are then sent to the discriminator D_B for training: when the input is fake_b, the cross-entropy between the discriminator output and 0 is computed; when the input is b, the cross-entropy between the discriminator output and 1 is computed. The two cross-entropy results are added to obtain the discriminator loss, which guides the training of D_B. The generator training stage is shown in Figure 2(a) and Figure 2(b); here the parameters of the discriminator D_B are fixed. fake_b is sent to D_B for discrimination, the discrimination result is point-multiplied by the mask of picture a, and its cross-entropy with 1 gives L_GAN(G_AB, D_B, A, B). The backgrounds of fake_b and a are extracted by point-multiplying each with (1 − mask of a), and their L1 loss ||G_AB(a)_b − a_b||_1 is computed. Passing fake_b through the generator G_BA gives rec_a, and the L1 loss between the foreground of rec_a and the foreground of a, ||G_BA(G_AB(a)_f) − a_f||_1, is computed. Feeding a into the generator G_BA gives idt_b, and the L1 loss ||G_BA(a) − a||_1 between idt_b and a is computed. These four losses, combined with appropriate weights, form the loss of G_AB. The reverse process is similar. The two generators are trained jointly; the total loss function of the forward and reverse processes, given below, is used to guide the training of the two generators.
[0083] L(G_AB, G_BA, D_A, D_B) = L_GAN(G_AB, D_B, A, B) + L_GAN(G_BA, D_A, B, A) + α·L_cyc(G_AB, G_BA) + β·L_idt(G_AB, G_BA) + γ·L_back(G_AB, G_BA, A, B)
[0084] where:
[0085] L_GAN(G_AB, D_B, A, B) = E[log D_B(b_f)] + E[log(1 − D_B(G_AB(a)_f))]
[0086] L_GAN(G_BA, D_A, B, A) = E[log D_A(a_f)] + E[log(1 − D_A(G_BA(b)_f))]
[0087] L_cyc(G_AB, G_BA) = E[||G_BA(G_AB(a)_f) − a_f||_1] + E[||G_AB(G_BA(b)_f) − b_f||_1]
[0088] L_idt(G_AB, G_BA, A, B) = E[||G_BA(a) − a||_1] + E[||G_AB(b) − b||_1]
[0089] L_back(G_AB, G_BA, A, B) = E[||G_AB(a)_b − a_b||_1] + E[||G_BA(b)_b − b_b||_1]
[0090] Here A represents the simulation picture data set and B the real photo data set; a represents a picture from the simulation data set and b a picture from the real data set; G_AB is the generator that produces a realistic-photo style from simulation pictures, and G_BA is the generator for its reverse process, producing a simulation-picture style from real photos; D_A is the discriminator for the simulation-picture domain A, and D_B is the discriminator for the real-photo domain B; the subscript f denotes extracting the foreground by point multiplication with the mask, and the subscript b denotes extracting the background by point multiplication with (1 − mask); E denotes expectation. The value of α is 500, the value of β is 250, and the value of γ is 1.
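The sketch below shows how the A→B part of this composite loss could be computed in PyTorch, assuming mask_a is a 0/1 foreground mask, the generators follow the (image, mask) interface of the earlier generator sketch, and the weights α=500, β=250, γ=1 from the text. Writing the adversarial term with binary cross-entropy weighted by the (resized) mask is an interpretation of the description in [0082], not a verbatim reproduction of the patent's code.

```python
import torch
import torch.nn.functional as F

def fg(x, m): return x * m          # foreground: point-multiply with the mask
def bg(x, m): return x * (1 - m)    # background: point-multiply with (1 - mask)

def generator_loss_ab(G_AB, G_BA, D_B, a, mask_a, alpha=500.0, beta=250.0, gamma=1.0):
    fake_b = G_AB(fg(a, mask_a), mask_a)
    # adversarial loss: discriminator output weighted by the mask, target 1
    pred = D_B(fake_b)
    m = F.interpolate(mask_a, size=pred.shape[-2:], mode="nearest")
    l_gan = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred), weight=m)
    # cycle-consistency loss on the foreground
    rec_a = G_BA(fg(fake_b, mask_a), mask_a)
    l_cyc = F.l1_loss(fg(rec_a, mask_a), fg(a, mask_a))
    # identity loss
    idt_b = G_BA(fg(a, mask_a), mask_a)
    l_idt = F.l1_loss(idt_b, a)
    # background-preservation loss
    l_back = F.l1_loss(bg(fake_b, mask_a), bg(a, mask_a))
    return l_gan + alpha * l_cyc + beta * l_idt + gamma * l_back
```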
[0091] 9. Send the data sets Train_CG and Train_real to the semantic segmentation network DeepLab v3+ for training, and save the trained parameters.
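A minimal training sketch for this step follows. It uses torchvision's DeepLabV3 (torchvision does not ship an official DeepLabV3+ model, so this is a stand-in), assumes three classes for the simulation labels (sky, sea surface, foreground target), and leaves the data loader abstract.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=3)   # sky / sea surface / foreground
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader, device="cuda"):
    model.to(device).train()
    for imgs, labels in loader:                            # labels: HxW maps of class indices
        imgs, labels = imgs.to(device), labels.to(device)
        out = model(imgs)["out"]                           # (B, 3, H, W) logits
        loss = criterion(out, labels)
        optim.zero_grad()
        loss.backward()
        optim.step()
```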
[0092] 10. After the above steps are completed, the semantic segmentation results can replace the manual labels: any simulation picture can go through steps 2-6 and then through the generator G_AB trained in CycleGAN to obtain a realistic-style picture, realizing fast conversion in batches.
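An end-to-end sketch of this inference step is given below. The helper names (reinhard_transfer, the masked generator interface) refer to the illustrative sketches above, the class-index mapping of the segmentation output is assumed, and the two input images are assumed to be already aligned and of equal size.

```python
import cv2
import numpy as np
import torch

def convert_simulation_image(cg_bgr, sea_bgr, seg_model, g_ab, device="cuda"):
    # 1) semantic segmentation replaces the manual label (assumed: 0 sky, 1 sea surface, 2 foreground)
    rgb = np.ascontiguousarray(cg_bgr[..., ::-1]).astype(np.float32) / 255.0
    x = torch.from_numpy(rgb).permute(2, 0, 1)[None].to(device)
    seg = seg_model(x)["out"].argmax(1)[0].cpu().numpy()
    fg_mask = (seg == 2).astype(np.uint8)
    bg_mask = ((1 - fg_mask) * 255).astype(np.uint8)
    # 2) background: Poisson fusion with the real sea photo, then Reinhard colour transfer (steps 2-6)
    ys, xs = np.where(bg_mask > 0)
    center = (int((xs.min() + xs.max()) / 2), int((ys.min() + ys.max()) / 2))
    fused = cv2.seamlessClone(sea_bgr, cg_bgr, bg_mask, center, cv2.NORMAL_CLONE)
    fused = reinhard_transfer(fused, sea_bgr)
    # 3) foreground: the trained generator G_AB restyles the target while the background is kept
    m = torch.from_numpy(fg_mask.astype(np.float32))[None, None].to(device)
    f_rgb = np.ascontiguousarray(fused[..., ::-1]).astype(np.float32) / 255.0
    f = torch.from_numpy(f_rgb).permute(2, 0, 1)[None].to(device) * 2 - 1   # scale to [-1, 1]
    with torch.no_grad():
        out = g_ab(f * m, m)                                # tanh output in [-1, 1]
    out = ((out[0].permute(1, 2, 0).cpu().numpy() + 1) * 127.5).astype(np.uint8)
    return np.ascontiguousarray(out[..., ::-1])             # back to BGR
```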
