Video synthesis model training method and device, video synthesis method and device, storage medium, program product and electronic equipment

A video synthesis model training technology, applied in the field of computers, which addresses the problems of low training efficiency and video quality in need of improvement, achieving the effects of improving training efficiency, reducing the loss of feature information, and enhancing relevance.

Inactive Publication Date: 2021-10-01
BEIJING CENTURY TAL EDUCATION TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] The current multi-model separate training method needs to prepare multiple sets of training sample data, one for each model; the training efficiency is low, and the quality of the synthesized video needs to be improved.



Embodiment Construction

[0040] Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.

[0041] It should be understood that the various steps described in the method implementations of the present disclosure may be executed in different orders and/or in parallel. In addition, method implementations may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect. ...



Abstract

The invention provides a video synthesis model training method and device, a video synthesis method and device, a storage medium, a program product, and electronic equipment. The method comprises: obtaining a sample text and a sample video, the sample video being a video of a real person reading the sample text; inputting the sample text into a speech synthesis sub-model to obtain a feature vector; inputting the feature vector into a voice-to-face reconstruction sub-model to obtain face feature parameters; inputting the face feature parameters and the sample video into a differentiable rendering sub-model to obtain a face feature map; inputting the face feature map into a generative adversarial network sub-model to obtain a virtual real-person video; and iteratively training the voice-to-face reconstruction sub-model, the differentiable rendering sub-model, and the generative adversarial network sub-model based on the virtual real-person video and the sample video until the loss function value of the generative adversarial network sub-model meets a preset condition.
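The four-stage pipeline and the "train until the loss meets a preset condition" loop described in the abstract can be sketched end to end. The sketch below is a minimal, illustrative NumPy version under stated assumptions: each sub-model is collapsed to a single linear map (the names `W_tts`, `W_face`, `W_rend`, and `W_gan` are hypothetical stand-ins, not from the patent), only the final map is updated, and the adversarial loss is replaced by a plain MSE against a single sample frame so that the stopping condition is visible. It is a structural sketch of the data flow, not an implementation of the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the four sub-models in the abstract.
# Each is a single linear map here; the real sub-models are deep networks.
W_tts  = rng.normal(size=(16, 32)) * 0.1   # text -> speech feature vector
W_face = rng.normal(size=(32, 8))  * 0.1   # speech features -> face parameters
W_rend = rng.normal(size=(8, 64))  * 0.1   # face params (+ sample video) -> face feature map
W_gan  = rng.normal(size=(64, 64)) * 0.1   # face feature map -> synthesized frame (trained below)

def forward(text_vec, sample_frame):
    speech = text_vec @ W_tts                        # speech synthesis sub-model
    params = speech @ W_face                         # voice-to-face reconstruction sub-model
    feat_map = params @ W_rend + 0.1 * sample_frame  # differentiable rendering sub-model
    frame = feat_map @ W_gan                         # generator of the GAN sub-model
    return feat_map, frame

text_vec = rng.normal(size=16)          # embedding of the sample text (toy)
sample_frame = rng.normal(size=64)      # one frame of the real-person sample video (toy)

# Joint iterative training, reduced to a normalized MSE gradient step on W_gan,
# standing in for the adversarial + reconstruction losses of the real method.
lr, threshold = 0.5, 1e-3
for step in range(2000):
    feat_map, frame = forward(text_vec, sample_frame)
    err = frame - sample_frame
    loss = float(np.mean(err ** 2))
    if loss < threshold:                # preset stopping condition on the loss value
        break
    # MSE gradient w.r.t. W_gan, normalized by ||feat_map||^2 for a stable step size
    W_gan -= lr * np.outer(feat_map, err) / (feat_map @ feat_map)

print(f"stopped at step {step} with loss {loss:.2e}")
```

Because the toy objective is linear in `W_gan`, the normalized step contracts the error by a constant factor each iteration, so the loop reaches the threshold in a handful of steps; the real GAN sub-model would instead alternate generator and discriminator updates.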

Description

Technical Field

[0001] The present invention relates to the field of computer technology, and in particular to a video synthesis model training method, a video synthesis model training device, a video synthesis method, a video synthesis device, a non-transitory computer-readable storage medium storing a computer program for implementing the video synthesis model training method or the video synthesis method, and an electronic device.

Background

[0002] At present, with the rapid development of deep learning technology, text-driven video generation has gradually become a research hotspot and can be applied to weather broadcasting, news broadcasting, online education, and other fields.

[0003] In related technologies, text-driven video generation usually trains multiple models separately. For example, text-to-speech synthesis is trained separately with a speech synthesis sub-model, and speech-to-face reconstruction likewise uses a separate...


Application Information

IPC(8): G06K9/62; G06N3/08; G10L13/02
CPC: G10L13/02; G06N3/08; G10L2013/021; G06F18/214
Inventor: 郎彦; 高原; 刘霄
Owner BEIJING CENTURY TAL EDUCATION TECH CO LTD