
Method and system for realizing video and audio driven face animation by combining modal particle characteristics

A facial-animation driving technology applied in speech analysis, animation production, speech recognition, and related fields. It addresses the problem that existing approaches do not take the characteristics of modal particles (interjections) into account, and achieves more vivid facial expression effects.

Active Publication Date: 2021-04-06
SHANGHAI JIAO TONG UNIV
Cites: 14 · Cited by: 11

AI Technical Summary

Problems solved by technology

[0004] In view of the high production cost and long production period of the prior art, and the respective drawbacks of driving methods based solely on video streams or solely on audio, neither of which takes the characteristics of modal particles into account, the present invention proposes a video- and audio-driven facial animation method and system that incorporates modal particle features. By inputting video of the user's face and audio of the user's voice, the method jointly drives a three-dimensional Avatar model in a virtual scene, and on the basis of real-time driving makes both the global and local facial animation more realistic and vivid.




Embodiment Construction

[0022] As shown in figure 1, this embodiment relates to a video- and audio-driven facial animation system that incorporates modal particle features, comprising: an OpenFace video tracking module, a speech prediction module, a modal particle enhancement module, and a visualization module. The OpenFace video tracking module processes the video input, computes facial position and pose to obtain facial rotation angles and line-of-sight rotation angles, and performs expression AU (Action Unit) parameter detection to obtain AU intensity parameters. The speech prediction module constructs an audio feature matrix from the audio features extracted from the processed speech input, and uses a long short-term memory network (LSTM) to predict the mapping from an audio feature window to facial AU parameters, i.e., the expression AU parameters. The modal particle enhancement module converts the speech content into text, and fur…
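The modal particle enhancement step described above — speech-to-text, one-hot encoding of the detected particle, and concatenation with a low-dimensional audio latent — can be sketched as follows. The particle vocabulary, latent dimension, and function names are illustrative assumptions, not the patent's actual implementation; the downstream convolution and mapping network is omitted.

```python
import numpy as np

# Hypothetical vocabulary of Chinese modal particles (interjections);
# the patent does not enumerate its actual list.
MODAL_PARTICLES = ["啊", "呀", "吧", "呢", "嘛"]

def particle_onehot(text: str) -> np.ndarray:
    """One-hot vector marking the first modal particle found in the
    recognized text; an all-zeros vector means none was detected."""
    vec = np.zeros(len(MODAL_PARTICLES), dtype=np.float32)
    for i, particle in enumerate(MODAL_PARTICLES):
        if particle in text:
            vec[i] = 1.0
            break
    return vec

def enhance_latent(latent: np.ndarray, text: str) -> np.ndarray:
    """Concatenate the low-dimensional audio latent with the particle
    one-hot, yielding the particle-aware intermediate variable that the
    enhancement network would convolve further."""
    return np.concatenate([latent, particle_onehot(text)])

# Usage: an assumed 16-dim audio latent plus the 5-dim particle one-hot.
latent = np.zeros(16, dtype=np.float32)
z = enhance_latent(latent, "好啊")
print(z.shape)  # (21,)
```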



Abstract

The invention discloses a method and system for realizing video- and audio-driven facial animation that incorporates modal particle characteristics. The method comprises the steps of: constructing a speech feature matrix by extracting speech features, downsampling the feature matrix through the multilayer convolution operations of a modal-particle-enhancement training network, and mapping it to an intermediate variable in a low-dimensional space; converting the input speech into text, identifying modal particles in the text, constructing a one-hot vector, and concatenating the one-hot vector with the intermediate variable to obtain an intermediate variable containing modal particle features; and performing convolution through the modal-particle-enhancement training network, then obtaining the expression AU parameters of the current frame through mapping, where these parameters are fitted with the AU parameters generated by the video tracking and speech prediction algorithms to serve as the driving parameters of the face model, thereby realizing expression enhancement. By inputting video of the user's face and audio of the user's voice, the invention jointly drives a three-dimensional Avatar model in a virtual scene, and on the basis of real-time driving achieves more realistic and vivid expression effects for both the global and local facial animation.
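The final fitting step — combining the AU parameters from video tracking, speech prediction, and modal particle enhancement into a single set of driving parameters — could be sketched as a weighted blend. The weights and the 0–5 intensity range are assumptions for illustration; the abstract only states that the streams are fitted together, without specifying how.

```python
import numpy as np

def fuse_au_params(au_video, au_speech, au_particle, w=(0.5, 0.3, 0.2)):
    """Weighted blend of three AU parameter streams into one driving
    vector. The weights are illustrative; AU intensities are clipped to
    the conventional 0-5 scale used by OpenFace-style trackers."""
    au_video, au_speech, au_particle = map(
        np.asarray, (au_video, au_speech, au_particle)
    )
    fused = w[0] * au_video + w[1] * au_speech + w[2] * au_particle
    return np.clip(fused, 0.0, 5.0)

# Usage: blend per-AU intensities from the three sources.
fused = fuse_au_params([1.0, 0.0], [2.0, 4.0], [3.0, 1.0])
print(fused)
```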

Description

Technical Field

[0001] The present invention relates to a technology in the field of computer graphics, in particular to a method and system for realizing video- and audio-driven facial animation combined with features of modal particles.

Background Technique

[0002] Existing implementations of facial expression animation include traditional interactive modeling and key-frame animation methods, motion capture methods based on facial marker tracking, driving methods based on video stream images, and driving methods based on audio prediction. Among them, interactive modeling and key-frame animation methods are widely used in games, 3D animation, and other fields, and are the mainstream methods for producing high-precision 3D facial animation. This approach has the advantages of high precision and mature technology, and is suitable for assembly-line production, but it requires long-term setup and adjustment by modelers and animators, which is time-consuming and labor-intensive, and t…


Application Information

IPC(8): G06T13/20 · G06T13/40 · G06K9/00 · G06F40/284 · G10L15/26 · G10L25/30
CPC: G06T13/205 · G06T13/40 · G06F40/284 · G10L15/26 · G10L25/30 · G06V40/168 · G06V40/174
Inventors: 李舜, 肖双九
Owner: SHANGHAI JIAO TONG UNIV