Video synthesis method, model training method and related device

A video synthesis technology, applied in fields such as color-television components, television-system components, and television, capable of solving problems such as poor motion coherence and poor video continuity.

Active Publication Date: 2019-06-28
TENCENT TECH (SHENZHEN) CO LTD


Problems solved by technology

[0004] However, the coherence of the motion transfer across the frames of the action sequence is poor, which leads to poor temporal continuity of the synthesized video.




Embodiment Construction

[0116] The embodiments of the present application provide a video synthesis method, a model training method, and related devices, which use multi-frame source image information to generate the output image corresponding to an action sequence. The relevance of information between consecutive frames is thereby fully considered, and the temporal continuity of the synthesized video is enhanced.

[0117] The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims, and drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It is to be understood that data so termed are interchangeable under appropriate circumstances, such that the embodiments of the application described herein can, for example, be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprising" and "corresponding to" and any variations...



Abstract

The invention discloses a video synthesis method. The method comprises: acquiring K frames of source image information of a first video to be synthesized, where each frame of source image information comprises a source image and a corresponding source action key point; acquiring multi-frame target image information of a second video to be synthesized, where each frame of target image information comprises a target action key point; obtaining, through a video synthesis model, a first output image corresponding to the K frames of source image information and a first target action key point, where the video synthesis model performs fusion processing on the source images, the source action key points, and the target action key point; and generating a composite video according to the action reference sequence and the first output image. The invention further discloses a model processing method and device. Because the output image corresponding to the action sequence is generated from multi-frame source image information, the relevance of information between consecutive frames is fully considered, and the temporal continuity of the synthesized video is enhanced.
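The pipeline described in the abstract can be sketched as follows. This is an illustrative assumption, not the patented model: the function names, the exponential pose-distance weighting, and the simple weighted image blend are stand-ins for the learned fusion processing the patent leaves unspecified.

```python
import numpy as np

def fuse_frame(src_images, src_keypoints, tgt_keypoint):
    """Blend K source images, weighting each one by how closely its
    source action key points match the target action key point.
    (Stand-in for the patent's learned fusion processing.)"""
    # distance between each source pose and the target pose
    d = np.array([np.linalg.norm(kp - tgt_keypoint) for kp in src_keypoints])
    w = np.exp(-d)        # closer poses get larger weights
    w = w / w.sum()       # normalize to a convex combination
    # weighted average over the K stacked source images
    return np.tensordot(w, np.stack(src_images), axes=1)

def synthesize_video(src_images, src_keypoints, tgt_keypoints):
    """One output frame per target key point. Every frame shares the same
    K-frame source context, which is what couples consecutive frames."""
    return [fuse_frame(src_images, src_keypoints, t) for t in tgt_keypoints]

# Toy usage: K = 3 source frames of 4x4 grayscale images, 5-point 2D poses.
rng = np.random.default_rng(0)
K = 3
src_imgs = [rng.random((4, 4)) for _ in range(K)]
src_kps = [rng.random((5, 2)) for _ in range(K)]
tgt_kps = [rng.random((5, 2)) for _ in range(2)]
video = synthesize_video(src_imgs, src_kps, tgt_kps)
print(len(video), video[0].shape)  # 2 frames, each 4x4
```

The key structural point the sketch captures is that each output frame is a function of all K source frames, rather than of a single source frame.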

Description

Technical field

[0001] The present application relates to the field of artificial intelligence, and in particular to a video synthesis method, a model training method, and related devices.

Background technique

[0002] With the diversification of media forms, strategies for transferring the actions of characters between different videos have been proposed. Two videos are given: one contains the target person whose action is to be synthesized, and the other contains the source person providing the transferred action. Motion is transferred between the characters through a pixel-based end-to-end process. Video motion transfer can enable untrained amateurs to perform dance moves like professional ballerinas and to dance like pop stars.

[0003] At present, in video action transfer methods, the common approach is to first provide two videos, one containing the transfer object and the other containing the action reference sequence, and then perform a single-step operation on each frame of the video according to t...
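The single-step, frame-by-frame transfer described in the background can be sketched as below. This is an illustrative assumption, not code from the patent: the toy generator and its pose-shift behavior are hypothetical. The point is structural: each output frame depends only on one reference pose, so nothing ties consecutive frames together, which is the source of the temporal-discontinuity problem noted in paragraph [0004].

```python
import numpy as np

def toy_generator(src_image, pose):
    """Stand-in for a pixel-based generator: shift the source image by the
    pose's mean offset (purely illustrative)."""
    dy, dx = np.round(pose.mean(axis=0)).astype(int)
    return np.roll(src_image, shift=(dy, dx), axis=(0, 1))

def per_frame_transfer(src_image, ref_poses):
    # Frame t depends only on (src_image, ref_poses[t]); frames are
    # generated independently, with no inter-frame information sharing.
    return [toy_generator(src_image, p) for p in ref_poses]

src = np.arange(16.0).reshape(4, 4)
poses = [np.zeros((5, 2)), np.ones((5, 2))]
frames = per_frame_transfer(src, poses)
print(len(frames))  # one output frame per reference pose
```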


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N5/272, H04N5/265, H04N5/262, G06K9/00, G06K9/62
CPC: H04N5/265, G06V20/47, G06V10/449, G06V10/806, G06F18/253, G09G5/377, G09G2340/10, G06F18/25
Inventor: 黄浩智, 成昆, 袁春, 刘威
Owner: TENCENT TECH (SHENZHEN) CO LTD