Video generation model training method, video generation method and device

A technology for model and video generation, applied in the field of artificial intelligence. It addresses problems such as poor quality of generated fitting videos, spatio-temporal discontinuity, and spatio-temporal inconsistency in fitting videos, and achieves stable, jitter-free images and improved image quality.

Pending Publication Date: 2022-06-17
BEIJING QIYI CENTURY SCI & TECH CO LTD

AI Technical Summary

Problems solved by technology

The synthetic fitting images output by the Try-on module for each video frame are then spliced into a fitting video. The spliced fitting video suffers from spatio-temporal inconsistency: the spatial content of the fitting video is not continuous over time, the generated fitting video jitters, and the background area is inconsistent with the clothing area, resulting in poor quality of the generated fitting video.



Embodiment Construction

[0072] The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.

[0073] As shown in Figure 1, an embodiment of the present application provides a video generation model training method. The method can be applied to an electronic device, such as a smartphone, tablet computer, desktop computer, or server, and includes:

[0074] S101. Obtain a sample fitting video and a sample clothes image.

[0075] Here, the sample fitting video is a video of a person wearing the sample clothes.

[0076] S102. Extract a pose image sequence and a background image sequence from the sample fitting video.

[0077] The pose image sequence includes the pose information of the person in each video frame of the sample fitting video, and the background image sequence includes the imag...
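Step S102 can be pictured as a per-frame loop over the sample video. The sketch below is illustrative only and not from the patent: `estimate_pose` and `person_mask` are hypothetical stand-ins for a real pose estimator and human-segmentation model, and a frame is reduced to a small grid of pixel values, with values of 10 or more marking the person region.

```python
# Hypothetical sketch of S102: split a fitting video into a pose sequence
# and a background sequence, frame by frame.

def estimate_pose(frame):
    # Hypothetical pose estimator: return coordinates of "person" pixels
    # (here, any pixel with value >= 10 is treated as part of the person).
    return [(r, c) for r, row in enumerate(frame)
            for c, v in enumerate(row) if v >= 10]

def person_mask(frame):
    # Hypothetical segmentation: True where a pixel belongs to the person.
    return [[v >= 10 for v in row] for row in frame]

def extract_sequences(video):
    """Return (pose sequence, background sequence) for a list of frames."""
    pose_seq, bg_seq = [], []
    for frame in video:
        pose_seq.append(estimate_pose(frame))
        mask = person_mask(frame)
        # Background frame: the original frame with the person blanked out.
        bg = [[0 if mask[r][c] else frame[r][c]
               for c in range(len(frame[0]))] for r in range(len(frame))]
        bg_seq.append(bg)
    return pose_seq, bg_seq

# A tiny one-frame 2x2 "video": 10 and 12 are person pixels, 3 and 4 background.
video = [[[10, 3], [4, 12]]]
poses, backgrounds = extract_sequences(video)
```

A real implementation would replace the two helpers with a keypoint detector and a segmentation network, but the per-frame structure of the loop is the same.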



Abstract

The embodiment of the invention provides a video generation model training method and device, and a video generation method and device, in the technical field of artificial intelligence. The scheme is as follows: obtain a sample fitting video and a sample clothes image; extract a pose image sequence and a background image sequence from the sample fitting video; deform the sample clothes image according to the pose information of the person in each video frame of the sample fitting video to obtain a deformed clothes image sequence; input the deformed clothes image sequence, the pose image sequence, and the background image sequence into a video generation model to obtain a synthesized fitting video; compute a loss function value from the synthesized fitting video and the sample fitting video; and adjust the video generation model parameters based on the loss function value until the model converges, at which point training is complete. The fitting video generated by the trained model is stable and free of jitter, and its background area is consistent with the clothing area.
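The training procedure in the abstract is a standard loss-driven loop. The following sketch is an assumption-laden toy, not the patent's implementation: the "video generation model" is reduced to a single scalar weight, `warp_clothes` is a hypothetical stand-in for the pose-guided deformation step, and the loss is a simple squared error between the synthesized and sample values.

```python
# Toy sketch of the training loop: deform clothes, synthesize, compute loss,
# adjust parameters, stop at convergence. All names here are illustrative.

def warp_clothes(clothes, pose):
    # Hypothetical deformation: nudge the clothes value toward the pose cue.
    return clothes + 0.1 * pose

def train(samples, lr=0.01, tol=1e-8, max_steps=10_000):
    w = 0.0  # the entire "model" is one weight, for illustration
    for _ in range(max_steps):
        grad, loss = 0.0, 0.0
        for clothes, pose, background, target in samples:
            # Inputs to the "model": deformed clothes + pose + background.
            x = warp_clothes(clothes, pose) + pose + background
            err = w * x - target      # synthesized frame minus sample frame
            loss += err * err         # squared-error loss
            grad += 2 * err * x       # d(loss)/dw
        w -= lr * grad                # adjust model parameters
        if loss < tol:                # convergence: training is complete
            break
    return w

# One toy sample: (clothes, pose, background, target frame value).
# With these numbers the optimal weight is exactly 1.0.
samples = [(1.0, 2.0, 3.0, 6.2)]
w = train(samples)
```

In the actual scheme the scalar would be a deep network processing whole image sequences and the update would come from backpropagation, but the overall shape — forward pass, loss against the sample fitting video, parameter update, convergence check — is the same.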

Description

technical field

[0001] The present application relates to the technical field of artificial intelligence, and in particular to a video generation model training method, a video generation method, and an apparatus.

Background technique

[0002] With the development of online e-commerce platforms, virtual fitting technology, which simulates the clothes selected by the user on the human body, can enhance the user's shopping experience. Since 3D virtual fitting solutions consume a large amount of computing resources, 2D virtual fitting solutions are the mainstream research direction in this field.

[0003] At present, the Try-on modules in existing 2D virtual fitting schemes generate virtual fitting videos frame by frame. That is, after obtaining the character video uploaded by the user, the deformed clothes image and one video frame of the character video are input into the Try-on module at a time, and then t...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC8): G06N20/00; G06Q30/06
CPC: G06N20/00; G06Q30/0643
Inventors: 蒋剑斌; 王倓
Owner BEIJING QIYI CENTURY SCI & TECH CO LTD