
Video face fat and thin editing method

A video face fat and thin editing technology, applied in the field of video face editing, which solves the problem that existing methods cannot obtain unique face shape parameters and achieves efficient and stable 3D face reconstruction.

Active Publication Date: 2021-08-06
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0014] However, none of the above video-based reconstruction methods can obtain unique face shape parameters; that is, the face shape parameters obtained from the first frame and those obtained from the last frame are not the same.



Examples


Embodiment Construction

[0073] As shown in Figure 1, the video face fat and thin editing method includes the following steps:

[0074] (1) Reconstruct a 3D face model based on the face video, and generate the 3D face shape parameters of the face video together with the facial expression parameters and face pose parameters of each video frame.
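This step can be read as evaluating a linear 3D morphable face model (3DMM): one shape parameter vector shared by the whole video, plus per-frame expression and rigid pose parameters. The sketch below only illustrates that composition with generic NumPy code; the basis matrices and variable names are assumptions for illustration, not the patent's exact model.

```python
import numpy as np

# Minimal 3DMM sketch (illustrative, not the patent's exact model).
# mean_shape: (3N,) mean face vertices; shape_basis: (3N, Ks); expr_basis: (3N, Ke).
# alpha (shape) is shared across the whole video; delta (expression) and the
# rigid pose (R, t) are estimated separately for every frame.
def reconstruct_face(mean_shape, shape_basis, expr_basis, alpha, delta, R, t):
    verts = mean_shape + shape_basis @ alpha + expr_basis @ delta  # (3N,)
    verts = verts.reshape(-1, 3)                                   # (N, 3)
    return verts @ R.T + t                                         # apply rigid pose

# Usage idea: one shared alpha, per-frame delta/R/t.
# meshes = [reconstruct_face(mean, B_s, B_e, alpha, f["delta"], f["R"], f["t"])
#           for f in frames]
```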

[0075] (1-1) Reconstruct the 3D face model based on the monocular vision 3D face reconstruction algorithm, and calculate the face pose parameters of each video frame in the face video; this step performs rigid pose estimation on the face video.
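As an illustration of per-frame rigid pose estimation, the stand-in below fits a rotation and translation by aligning the model's 3D facial landmarks to detected 2D landmarks with OpenCV's PnP solver. The cited framework's own fitting procedure differs; the pinhole intrinsics and landmark inputs here are assumed.

```python
import numpy as np
import cv2

# Illustrative rigid pose estimation for one frame (not the patent's method):
# align 3D model landmarks to detected 2D landmarks with a PnP solve.
def estimate_pose(model_landmarks_3d, image_landmarks_2d, image_w, image_h):
    focal = float(image_w)  # simple assumed pinhole intrinsics
    K = np.array([[focal, 0.0, image_w / 2.0],
                  [0.0, focal, image_h / 2.0],
                  [0.0, 0.0, 1.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(
        model_landmarks_3d.astype(np.float64),
        image_landmarks_2d.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the face pose
    return R, tvec              # per-frame rigid pose parameters
```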

[0076] The monocular vision 3D face reconstruction algorithm adopts "A Multiresolution 3D Morphable Face Model and Fitting Framework"

[0077] (In the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), Vol. 4, 79-86), using the lowest-resolution model of the multi-resolution 3D face model disclosed therein; the specific steps are as follows:

[0078] (1-1-1) Reconst...



Abstract

The invention discloses a video face fat and thin editing method, which comprises the following steps: reconstructing a three-dimensional face model based on a face video, and outputting a three-dimensional face shape parameter together with a facial expression parameter and a face pose parameter for each video frame; adjusting the three-dimensional face model based on a three-dimensional face fat and thin adjustment algorithm, and migrating the adjustment result to each frame to generate a deformed three-dimensional face model for each frame; establishing a dense mapping of the face boundary before and after deformation on the two-dimensional plane by using a directed distance field, and adjusting the dense mapping based on the structure of the three-dimensional face model; and deforming the face video frames based on the dense mapping, reducing the background distortion caused by the deformation through energy optimization, obtaining deformed face video frames, and replacing them back into the original face video. The method automatically generates a face video conforming to the desired fat-thin scale, and can still obtain an ideal result under conditions such as occlusion, long hair, and wearing glasses.
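One part of the pipeline that benefits from a concrete illustration is the dense 2D mapping between the face boundary before and after deformation. The minimal sketch below only approximates the idea with a Euclidean distance transform that sends every original boundary pixel to its nearest deformed boundary pixel; the patent's directed distance field and its structure-aware adjustment are not reproduced, and all names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Illustrative dense boundary correspondence (a simplification of the patent's
# directed-distance-field mapping). Both inputs are boolean masks that are True
# on the face boundary pixels before (src) and after (dst) deformation.
def boundary_mapping(src_boundary_mask, dst_boundary_mask):
    # distance_transform_edt with return_indices gives, for each pixel, the
    # coordinates of the nearest pixel where the input is zero (the dst boundary).
    _, nearest = ndimage.distance_transform_edt(
        ~dst_boundary_mask, return_indices=True)
    src_pts = np.argwhere(src_boundary_mask)               # (M, 2) row/col coords
    dst_pts = nearest[:, src_pts[:, 0], src_pts[:, 1]].T   # nearest deformed points
    return src_pts, dst_pts  # dense correspondences used to drive the image warp
```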

Description

Technical field

[0001] The invention relates to the technical field of portrait editing, and in particular to a method for editing the fatness and thinness of faces in video.

Background technique

[0002] With the rapid development of social networks and media, more and more people are actively sharing personal videos and pictures on the Internet. Image editing techniques are often used to create special facial effects, such as exaggeration and beautification of human faces. Current research focuses on editing the color, texture, and shape of human faces.

[0003] "Deep Shapely Portraits" (In MM'20: The 28th ACM International Conference on Multimedia. 2020. 1800-1808) discloses an image-based automatic face fat and thin editing method that uses neural networks to identify the most suitable fat-thin scale for a specific person and deforms the image with a rendering-fusion technique. However, this technique is unstable in the temporal domain and cannot handle side faces well. ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T19/20, G06T17/00
CPC: G06T19/20, G06T17/00, G06T2200/04
Inventor: 唐祥峻, 孙文欣, 金小刚
Owner: ZHEJIANG UNIV