Method and system for realizing video and audio driven face animation combined with modal particle features

A face-driving animation technology and system for use in animation production, speech analysis, speech recognition, and related fields. It addresses the problem that existing driving methods do not take the characteristics of modal particles into account, and achieves more vivid facial expressions.

Active Publication Date: 2022-05-17
SHANGHAI JIAOTONG UNIV

AI Technical Summary

Problems solved by technology

[0004] In the prior art, facial animation production demands high cost and long production cycles, and methods driven solely by a video stream or solely by audio each have their own disadvantages; neither takes the characteristics of modal particles into account. Against these defects, the present invention proposes a video- and audio-driven facial animation method and system that incorporates modal particle features. By taking the video of the user's face and the audio of the user's voice as input, it jointly drives a three-dimensional Avatar model in a virtual scene, making the overall and local facial animation more realistic and vivid on the basis of real-time driving.

Embodiment Construction

[0022] As shown in Figure 1, this embodiment relates to a video- and audio-driven facial animation system incorporating modal particle features, comprising an OpenFace video tracking module, a speech prediction module, a modal particle enhancement module, and a visualization module. The OpenFace video tracking module processes the video input, computes facial position and pose to obtain facial rotation angles and line-of-sight (gaze) rotation angles, and performs expression AU detection to obtain AU intensity parameters. The speech prediction module constructs an audio feature matrix from audio features extracted from the processed voice input and uses a long short-term memory network (LSTM) to predict the mapping between an audio feature window and the facial AU parameters, i.e. the expression AU parameters. The modal particle enhancement module converts the speech content into text and further identifies modal particles from it to construct a one-hot vector…
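The mapping the speech prediction module learns, from a sliding window of audio features to expression AU parameters via an LSTM, can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature dimension (39, an MFCC-style size), window length (16 frames), hidden size, and AU count (17, matching OpenFace's AU intensity set) are all assumptions.

```python
# Minimal sketch of an LSTM mapping audio feature windows to AU intensities.
# All dimensions below are illustrative assumptions, not values from the patent.
import torch
import torch.nn as nn

class AudioToAU(nn.Module):
    def __init__(self, feat_dim: int = 39, hidden: int = 128, num_aus: int = 17):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_aus)

    def forward(self, windows: torch.Tensor) -> torch.Tensor:
        # windows: (batch, window_len, feat_dim) slice of the audio feature matrix
        out, _ = self.lstm(windows)
        # Predict AU intensities for the current frame from the window's
        # final hidden state.
        return self.head(out[:, -1, :])

if __name__ == "__main__":
    model = AudioToAU()
    batch = torch.randn(4, 16, 39)   # 4 windows of 16 audio feature frames
    au_params = model(batch)         # (4, 17) predicted AU intensity parameters
    print(au_params.shape)
```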

Abstract

A method and system for video- and audio-driven face animation incorporating modal particle features. A speech feature matrix is constructed by extracting speech features; a multi-layer convolutional training network, enhanced with modal particle information, downsamples the feature matrix and maps it to a low-dimensional intermediate variable. The input speech is converted into text, modal particles are identified from the text content and encoded as a one-hot vector, which is spliced with the intermediate variable to obtain an intermediate variable containing modal particle features. This variable is then mapped to the expression AU parameters of the current frame, which are fitted together with the AU parameters generated by the video tracking and speech prediction algorithms and used as the driving parameters of the face model, achieving expression enhancement. By taking the video of the user's face and the audio of the user's voice as input, the invention jointly drives a three-dimensional Avatar model in a virtual scene and, on the basis of real-time driving, makes the overall and local facial animation more realistic and vivid.
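The one-hot splicing and AU fitting described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: the modal particle vocabulary, latent size, stand-in decoder, and blending weight `alpha` are hypothetical, and since the abstract does not specify how the enhanced and tracked AU parameters are fitted, simple linear blending stands in here.

```python
# Sketch of the modal particle enhancement step: one-hot encode a detected
# particle, splice it onto the low-dimensional intermediate variable, decode
# to current-frame AU parameters, and blend with tracked/predicted AUs.
import numpy as np

# Assumed vocabulary of Chinese modal particles; illustrative only.
MODAL_PARTICLES = ["啊", "呀", "哦", "嗯", "唉"]

def one_hot_particle(token: str) -> np.ndarray:
    """Encode a detected modal particle as a one-hot vector (all zeros if absent)."""
    vec = np.zeros(len(MODAL_PARTICLES), dtype=np.float32)
    if token in MODAL_PARTICLES:
        vec[MODAL_PARTICLES.index(token)] = 1.0
    return vec

def enhance_aus(latent, token, decode, tracked_aus, alpha=0.5):
    """Splice particle features onto the intermediate variable, decode to
    AU parameters, and blend with the tracked/predicted AUs. `alpha` is a
    hypothetical blending weight; the patent's fitting step is unspecified."""
    z = np.concatenate([latent, one_hot_particle(token)])
    enhanced = decode(z)  # decoder mapping the spliced latent to AU parameters
    return alpha * enhanced + (1.0 - alpha) * np.asarray(tracked_aus)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.standard_normal(32).astype(np.float32)    # assumed latent size
    decode = lambda z: np.full(17, 0.5, dtype=np.float32)  # stand-in decoder
    tracked = rng.random(17).astype(np.float32)            # AUs from tracking/prediction
    print(enhance_aus(latent, "啊", decode, tracked))       # blended 17-dim AU vector
```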

Description

Technical field

[0001] The present invention relates to a technology in the field of computer graphics, and in particular to a method and system for realizing video- and audio-driven facial animation incorporating modal particle features.

Background technique

[0002] Existing implementations of facial expression animation include traditional interactive modeling and key-frame animation methods, motion capture methods based on facial marker tracking, driving methods based on video stream images, and driving methods based on audio prediction. Among them, interactive modeling and key-frame animation are widely used in games, 3D animation, and other fields, and are the mainstream approach for producing high-precision 3D facial animation. This approach offers high precision and mature technology and is suitable for pipeline production, but it requires prolonged setup and adjustment by modelers and animators, which is time-consuming and labor-intensive…

Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06T13/20, G06T13/40, G06V40/16, G06F40/284, G10L15/26, G10L25/30
CPC: G06T13/205, G06T13/40, G06F40/284, G10L15/26, G10L25/30, G06V40/168, G06V40/174
Inventor: 李舜, 肖双九
Owner: SHANGHAI JIAOTONG UNIV