Video generation method, device, electronic equipment and storage medium

A video and original video technology, applied in the field of video processing, which can solve problems such as insufficient resolution and the easy loss of source-domain character feature details, and achieve the effect of enhanced temporal consistency.

Active Publication Date: 2021-12-21
TSINGHUA UNIV +1

AI Technical Summary

Problems solved by technology

However, related technologies tend to lose the feature details of the source-domain characters when generating portraits based on pose information, and it is difficult to ensure consistency over longer time sequences in the generated videos.
[0004] In summary, the video generation schemes in the prior art suffer from insufficient resolution and easily lose the details of character features.


Examples


Embodiment 1

[0097] Embodiment 1. Generation of each frame of video

[0098] As shown in Figure 2, the video generation method provided by the embodiment of the present disclosure generates the target image through the following steps:

[0099] Step 201, acquiring posture information and appearance information of a first object, and posture information of a second object.

[0100] In a specific implementation, a pre-trained key point detection model is used to extract the pose and generate the pose information. First, the key points of the human body in the video are located; then, the key points are connected according to the joints of the human body; finally, a human skeleton image containing each key point is obtained, which completes the acquisition of the posture information. The positioning of the key points includes the positioning of the face, the hands, and each joint of the human body. For each key point, its corresponding two-dimension...
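The paragraph above describes locating key points and connecting them into a skeleton image that serves as the posture information. A minimal sketch of that rendering step follows, assuming the key points are already available as an (N, 2) array from some pre-trained detector; the detector itself is a stand-in, and the joint connectivity list is illustrative rather than the patent's exact skeleton topology.

```python
# Sketch of paragraph [0100]: render detected body key points and their joint
# connections as a skeleton image (the posture information of one frame).
import numpy as np
import cv2

# Hypothetical joint pairs (indices into the key point array) describing which
# key points are connected "according to the joints of the human body".
SKELETON_EDGES = [
    (0, 1), (1, 2), (2, 3),  # e.g. head -> neck -> shoulder -> elbow
    (1, 4), (4, 5),          # e.g. neck -> hip -> knee
]

def keypoints_to_skeleton_image(keypoints, image_size=(256, 256)):
    """Render 2D key points and their joint connections as a skeleton image."""
    h, w = image_size
    canvas = np.zeros((h, w), dtype=np.uint8)

    # Draw every located key point (face, hands and body joints alike).
    for x, y in keypoints:
        cv2.circle(canvas, (int(x), int(y)), radius=3, color=255, thickness=-1)

    # Connect key points along the assumed joint topology.
    for i, j in SKELETON_EDGES:
        if i < len(keypoints) and j < len(keypoints):
            p1 = tuple(int(v) for v in keypoints[i])
            p2 = tuple(int(v) for v in keypoints[j])
            cv2.line(canvas, p1, p2, color=255, thickness=2)

    return canvas
```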

Embodiment 2

[0119] Embodiment 2. Optimization of Video Timing Information

[0120] As shown in Figure 3, the video generation method provided by the embodiment of the present disclosure optimizes the video timing information through the following steps:

[0121] Step 301, forming a preset time sequence based on the posture change sequence of the transferred second object, and connecting the frames of images generated according to the preset time sequence to obtain a target video of the target object that includes the appearance characteristics of the first object and the posture characteristics of the second object.
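Step 301 amounts to ordering the generated target frames by the second object's posture-change sequence and concatenating them into a video. A minimal sketch of that assembly step is shown below; the output file name, frame rate and codec are illustrative choices, not values fixed by the patent.

```python
# Sketch of step 301: connect generated frames in the preset time sequence.
import cv2

def frames_to_video(frames, order, out_path="target_video.mp4", fps=25):
    """Write the frames (list of HxWx3 uint8 BGR images) to a video file,
    following the preset time sequence `order` (a list of frame indices)."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    for idx in order:          # the preset time sequence
        writer.write(frames[idx])
    writer.release()
```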

[0122] Step 302, determining, based on the posture information and appearance information of the frame preceding the current frame and on the posture information of the current frame, the optical flow information between the preceding frame and the current frame, wherein the calculation of the optical flow information i...
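Step 302 uses optical flow between consecutive frames to enforce temporal consistency. The sketch below stands in for that computation with a classical Farneback flow estimate on the rendered frames (the patent derives the flow from posture and appearance information instead), and shows how the flow can warp the previous frame into alignment with the current one before the two are fused.

```python
# Sketch of step 302: estimate optical flow between consecutive frames and
# warp the previous frame toward the current one (temporal consistency aid).
import cv2
import numpy as np

def warp_previous_frame(prev_bgr, curr_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Dense flow from the current frame back to the previous frame: for every
    # current-frame pixel it tells where that pixel came from, which is what
    # backward warping needs.
    flow = cv2.calcOpticalFlowFarneback(
        curr_gray, prev_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Sample the previous frame at the flow-displaced locations.
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped_prev = cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)

    return flow, warped_prev
```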



Abstract

The embodiment of the invention provides a video generation method, a device, electronic equipment and a storage medium, which are used to improve the resolution of the images in a video and effectively retain their feature details. The method comprises the following steps: collecting an original video and extracting posture information and appearance information of a first object from the original video; determining posture difference information between the posture information of the first object and the posture information of a pre-collected second object, and determining feature information representing the appearance features of the first object based on the appearance information; generating multiple frames of initial images with the first object's appearance features and the first object's posture features based on the feature information, migrating the posture of the second object to each frame of the initial images based on the posture difference information, and generating multiple frames of target images with the first object's appearance features and the second object's posture features; and forming a preset time sequence based on the posture change sequence of the migrated second object, and connecting the multiple frames of target images according to the preset time sequence to generate a target video.
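One possible reading of the "posture difference information" in the abstract, assuming postures are represented as matched 2D key point arrays, is a set of per-key-point displacement vectors; the sketch below illustrates that assumption and is not an encoding the patent specifies.

```python
# Sketch of an assumed encoding of posture difference information as
# per-key-point displacements between matched key point arrays.
import numpy as np

def posture_difference(first_keypoints, second_keypoints):
    """Both inputs: arrays of shape (N, 2) with corresponding key points of
    the first and second object. Returns displacement vectors that move the
    first object's posture onto the second object's posture."""
    first = np.asarray(first_keypoints, dtype=np.float32)
    second = np.asarray(second_keypoints, dtype=np.float32)
    return second - first
```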

Description

Technical field

[0001] The present disclosure relates to the field of video processing, and in particular to a video generation method, device, electronic equipment and storage medium.

Background technique

[0002] The intelligent generation of video content is used to simulate and reproduce the dynamic visual world, and has a wide range of applications in computer vision, robotics, and computer graphics. Using a learned intelligent video-content generation model, users can generate realistic video content and can perform advanced control and modification of the generated content in a conditionally constrained manner.

[0003] Video generation solutions in the prior art mainly include unconditional intelligent generation of video content and conditional intelligent generation of video content. Among them, unconditional intelligent generation cannot control or modify the content of the generated video, and it is often difficult to obta...


Application Information

IPC(8): H04N5/262 G06N3/08 G06N3/04
CPC: H04N5/262 G06N3/08 G06N3/045
Inventor: 张慧, 李铮, 李强, 张文波
Owner TSINGHUA UNIV