Audio-driven face animation generation method and system fused with emotion coding

A technology for audio-driven face and emotion animation, applied in the field of artificial intelligence. It addresses problems such as the inability to judge emotional states, high algorithm complexity, and the fact that modal particles cannot fully reflect the speaker's true emotional state.

Active Publication Date: 2021-09-10
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

However, on the one hand, modal particles cannot fully reflect the speaker's real emotional state; on the other hand, if a sentence contains no modal particles, the method cannot judge the emotional state at all. In addition, this method must simultaneously extract AU parameters from the video and weight them with the audio-predicted AU parameters, so the complexity of the algorithm is high.




Embodiment Construction

[0044] In order to make the purpose, technical solution and technical effect of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0045] As shown in Figure 1, an audio-driven face animation generation method fused with emotion coding includes the following steps:

[0046] Step 1: Preprocess the audio signal and calculate the MFCC features;

[0047] In this embodiment, the sampling rate is set to 16000 Hz, the size of the sliding window is 0.02 s, and the step size of the sliding window is 0.02 s, so the frame rate of the extracted MFCC features is 50 fps.
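The framing arithmetic in this step can be sketched in plain Python. This is an illustrative sketch of the preprocessing parameters stated above (16000 Hz sampling rate, 0.02 s window, 0.02 s step), not the patent's actual implementation; a real pipeline would compute MFCCs per frame with an audio library.

```python
# Audio framing for Step 1, using the parameters given in the embodiment:
# 16 kHz sampling rate, 0.02 s window, 0.02 s step -> 50 MFCC frames per second.

SAMPLE_RATE = 16000
WIN_SEC = 0.02
HOP_SEC = 0.02

win = int(SAMPLE_RATE * WIN_SEC)  # 320 samples per window
hop = int(SAMPLE_RATE * HOP_SEC)  # 320 samples per hop

def frame_signal(signal):
    """Split a 1-D signal into fixed-size frames; one MFCC vector is computed per frame."""
    n = (len(signal) - win) // hop + 1 if len(signal) >= win else 0
    return [signal[i * hop : i * hop + win] for i in range(n)]

frames = frame_signal([0.0] * SAMPLE_RATE)  # one second of audio
print(len(frames), "frames per second")     # 50 frames per second
```

With window and step both 0.02 s the frames do not overlap, which is why the MFCC frame rate equals 1 / 0.02 = 50 fps.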

[0048] Step 2: Input MFCC features into the speech recognition module to further extract audio features;

[0049] Because the object of the present invention is to perform expression estimation for arbitrary audio, a generalized audio feature is extracted: the speech recognition module is first used to c...
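The subsequent fusion step described in the abstract, concatenating the per-frame audio features with a one-hot encoding of the recognized emotion category, can be sketched as follows. The emotion label set and feature dimensions here are illustrative assumptions, not taken from the patent.

```python
# Fusing audio features with a one-hot emotion code (illustrative sketch;
# the emotion categories and feature width are assumed, not from the patent).

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed category set

def one_hot(label):
    """Encode an emotion label as a one-hot vector over EMOTIONS."""
    vec = [0.0] * len(EMOTIONS)
    vec[EMOTIONS.index(label)] = 1.0
    return vec

def fuse(audio_feats, emotion):
    """Concatenate the same emotion one-hot vector onto every audio-feature frame."""
    code = one_hot(emotion)
    return [frame + code for frame in audio_feats]

feats = [[0.1] * 8 for _ in range(50)]  # 50 frames of 8-dim audio features (toy values)
fused = fuse(feats, "happy")
print(len(fused[0]))  # 12 = 8 audio dims + 4 emotion dims
```

The fused vectors are what the expression recognition module would consume, so the network sees the emotional state alongside the acoustic content of each frame.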



Abstract

The invention belongs to the field of artificial intelligence and relates to an audio-driven face animation generation method and system fused with emotion coding. The method comprises the following steps: first, the collected audio signal is preprocessed and MFCC features are extracted; the features are input into a speech recognition module to further extract audio features; the MFCC features are also input into a speech emotion recognition module to obtain an emotion category, which is one-hot encoded; the audio features and the one-hot emotion vector are then concatenated and input into an expression recognition module to obtain expression coefficients based on a 3DMM model; finally, the expression coefficients and a face template are input into a face animation generation module to obtain a 3D face animation with expressions. The method requires little computation, trains stably, and is simple and low-cost, so it can greatly reduce film production time and cost. Because the emotional state conveyed by the voice is fully considered and the emotion code is fed into the network, the generated face animation is more vivid and offers the user a better experience.
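The final stage of the pipeline, turning 3DMM expression coefficients and a face template into an animated mesh, is a linear blendshape combination in standard 3DMM formulations. The sketch below illustrates that combination with toy numbers; the vertex layout and basis are assumptions, not the patent's actual model.

```python
# Applying 3DMM expression coefficients to a face template
# (linear blendshape model; mesh size and basis values are illustrative).

def apply_expression(template, exp_basis, exp_coeffs):
    """vertices = template + sum_k coeff_k * basis_k (per-vertex offsets)."""
    verts = list(template)
    for coeff, basis in zip(exp_coeffs, exp_basis):
        verts = [v + coeff * b for v, b in zip(verts, basis)]
    return verts

template = [0.0, 1.0, 2.0]             # toy "mesh" with 3 scalar coordinates
exp_basis = [[1.0, 0.0, 0.0],          # expression blendshape 1
             [0.0, 0.5, 0.5]]          # expression blendshape 2
frame = apply_expression(template, exp_basis, [0.2, 1.0])
print(frame)  # [0.2, 1.5, 2.5]
```

Animating the face then amounts to predicting one coefficient vector per audio frame and re-evaluating this combination, so the per-frame cost stays small, consistent with the abstract's claim of low computation.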

Description

Technical field

[0001] The invention belongs to the field of artificial intelligence, and in particular relates to an audio-driven face animation generation method and system incorporating emotion coding.

Background technique

[0002] In recent years, with the continuous development of artificial intelligence, cross-modal learning and modeling techniques have attracted increasing attention in interdisciplinary research spanning computer vision, computer graphics, and multimedia. The visual and auditory modalities are two important sensory channels in human-to-human and human-computer interaction. There is a strong correlation between audio and facial animation: many facial movements are directly caused by speech production. Therefore, understanding the correlation between speech and facial movements can provide additional assistance in analyzing human behavior. Audio-driven facial animation technology has a wide range of application scenarios, such as virtual anchors...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62; G06T13/40; G06N3/04; G06N3/08; G10L15/02; G10L15/06; G10L15/16; G10L25/30; G10L25/63
CPC: G06T13/40; G06N3/08; G10L15/02; G10L15/063; G10L15/16; G10L25/30; G10L25/63; G06N3/044; G06F18/214
Inventor: 李太豪, 刘逸颖, 郑书凯, 刘昱龙, 马诗洁, 阮玉平
Owner: ZHEJIANG LAB