Video style conversion method based on self-encoding structure and gradient order preserving

A style-conversion and self-encoding technology applied to image coding, graphics and image conversion, image data processing, etc. It addresses the poor accuracy of optical flow estimation, the unreasonable way temporal consistency is calculated, and the resulting loss of smoothness and coherence in stylized video, with the effect of improving the visual experience, suppressing flickering and jittering, and reducing errors.

Active Publication Date: 2019-12-03
XIDIAN UNIV


Problems solved by technology

Although this method has achieved great success in improving the efficiency of video generation, two defects remain when it is used to convert video styles. First, halos appear around foreground objects in the generated stylized video, which degrades the human visual experience. Second, because optical flow estimation is insufficiently accurate, the temporal consistency loss is calculated in an unreasonable way: optical flow detected on the original video frames is not suitable for constraining the temporal consistency between stylized video frames, which introduces training errors and harms the smoothness and coherence of the stylized video.
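For context, the temporal consistency loss criticized here is commonly computed by warping the previous stylized frame onto the current one using optical flow estimated on the original frames, then penalizing the per-pixel difference on non-occluded pixels. The following is a minimal PyTorch sketch of that commonly used formulation, not the patent's own code; the warping convention and the occlusion-mask handling are assumptions.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp a frame (N,C,H,W) along a dense optical flow field (N,2,H,W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)   # pixel coordinates (2,H,W)
    coords = base.unsqueeze(0) + flow                               # shift by the flow (N,2,H,W)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                    # (N,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_consistency_loss(stylized_t, stylized_prev, flow, valid_mask):
    """Mean squared change between consecutive stylized frames along the flow,
    with occluded or unreliable-flow pixels masked out (valid_mask is 1 where the flow holds)."""
    warped_prev = warp(stylized_prev, flow)
    return torch.mean(valid_mask * (stylized_t - warped_prev) ** 2)
```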



Examples


Embodiment Construction

[0040] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0041] Referring to figure 1, the implementation steps of the present invention are as follows:

[0042] Step 1) Construct the training sample set and the test sample set:

[0043] (1a) Acquire a target style image s and M_r videos of resolution N_r×N_r, de-frame each video at frame rate N_f to obtain M_r groups of original video frames x, and at the same time extract the optical flow data of each video to obtain M_r groups of optical flow data, where N_r ≥ 64, M_r ≥ 100, N_f ≥ 25 (a data-preparation sketch follows step (1b));

[0044] (1b) Take 4M_r/5 groups of original video frames x and the corresponding 4M_r/5 groups of optical flow data to form the training set; the target style image s together with the training set forms the training sample set, and the remaining M_r/5 groups of original video frames x form the test sample set;
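As a concrete illustration of steps (1a)–(1b), the sketch below de-frames one video with OpenCV, computes dense optical flow between consecutive frames, and splits the resulting groups 4/5–1/5. It is only a sketch under assumptions: Farneback flow stands in for whichever optical flow estimator the patent actually uses, the resize to N_r×N_r and the parameter values are placeholders, and each video is assumed to already be encoded at frame rate N_f.

```python
import cv2

def deframe_and_flow(video_path, n_r=256):
    """De-frame one video into n_r x n_r frames and compute dense optical flow
    between consecutive frames (Farneback, used here as a stand-in estimator)."""
    cap = cv2.VideoCapture(video_path)
    frames, flows = [], []
    prev_gray = None
    while True:
        ok, frame = cap.read()   # reads at the video's native frame rate (assumed N_f >= 25)
        if not ok:
            break
        frame = cv2.resize(frame, (n_r, n_r))
        frames.append(frame)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            flows.append(cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0))
        prev_gray = gray
    cap.release()
    return frames, flows

def split_groups(groups):
    """Split M_r groups of (frames, flows) into 4M_r/5 training and M_r/5 test groups."""
    cut = 4 * len(groups) // 5
    return groups[:cut], groups[cut:]
```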

[0045] In existing video d...



Abstract

The invention provides a video style conversion method based on a self-encoding structure and gradient order preserving, which solves the technical problem that halos are generated at the edges of foreground targets in stylized video produced by existing video style conversion methods. The method comprises the following steps: 1) constructing a training sample set and a test sample set; 2) constructing a video stylization network model; 3) training the video stylization network model; 4) testing the trained video stylization network model; and 5) obtaining the video style conversion result. By constructing a video stylization network model based on a self-encoding structure and a gradient order-preserving loss function, the temporal consistency constraint is redefined in a more reasonable way, the halos generated at foreground target edges in the stylized video are effectively eliminated, the texture detail information of the original video is preserved, and the visual sensory experience is improved; the method can be used for the post-production processing of photography, film and television works.
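The gradient order-preserving loss is only named in this abstract, so the sketch below is one plausible illustrative reading rather than the patent's actual formula: it penalizes the stylized frame wherever its spatial gradient disagrees in sign (order) with the gradient of the original frame, the kind of constraint that keeps foreground edges aligned and suppresses halos. The function names and the exact penalty are assumptions.

```python
import torch

def spatial_gradients(x):
    """Horizontal and vertical finite-difference gradients of an image batch (N,C,H,W)."""
    gx = x[:, :, :, 1:] - x[:, :, :, :-1]
    gy = x[:, :, 1:, :] - x[:, :, :-1, :]
    return gx, gy

def gradient_order_loss(original, stylized):
    """Illustrative order-preserving penalty: wherever the original frame's gradient
    has one sign, penalize the stylized frame's gradient for taking the opposite sign."""
    ogx, ogy = spatial_gradients(original)
    sgx, sgy = spatial_gradients(stylized)
    # relu(-sign(orig) * styl) is zero whenever the two gradients agree in sign.
    loss_x = torch.relu(-torch.sign(ogx) * sgx).mean()
    loss_y = torch.relu(-torch.sign(ogy) * sgy).mean()
    return loss_x + loss_y
```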

Description

Technical field

[0001] The invention belongs to the technical field of digital image processing and relates to a video style conversion method, in particular to a video style conversion method based on a self-encoding structure and gradient order preserving, which can be used to complete the post-production processing of photography, film and television works.

Background technique

[0002] An important branch of the field of computer vision is image generation, which includes image super-resolution, image coloring, image semantic segmentation, and image or video style conversion. In this field, image or video style transfer is regarded as a general texture-synthesis problem: given a specified style image, texture is extracted and transferred from a source to a target, generating the corresponding style transfer result.

[0003] Image stylization transfer methods can be divided into two categories: image style transfer methods b...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T3/00, G06T9/00
CPC: G06T3/0012, G06T9/002
Inventor: 牛毅, 郭博嘉, 李甫, 李宜烜, 石光明
Owner: XIDIAN UNIV