Audio-driven face animation generation method and device, equipment and medium

A technology for audio-driven face animation, applied in the field of artificial intelligence, that addresses the complexity of the face image generation process and achieves the effects of improving generalization ability and reducing production cost.

Active Publication Date: 2021-12-24
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

Existing approaches are limited by the need for presets, and the face image generation process is complex.



Examples


Embodiment Construction

[0037] To make the objects, technical solutions and technical effects of the present invention clearer, the invention is further described in detail below in conjunction with the accompanying drawings and embodiments.

[0038] As shown in Figure 1, a cross-language audio-driven face animation generation method includes the following steps:

[0039] Step 1: collect speech signals, extract MFCC features, and input them into a phoneme recognizer to obtain the phoneme classification probabilities of the speech.

[0040] In this embodiment, the collected audio signal is sampled at 8000 Hz, with a sliding window size of 0.025 s, a sliding window step of 0.01 s, and 40 cepstral coefficients. The MFCC features are extracted, every 3 consecutive MFCC frames are stacked so that the feature length for each frame is 120, and the stacked features are then input into the phoneme recognizer for phoneme recognition.
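The framing and stacking arithmetic above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper names are hypothetical, the MFCC values themselves would come from a signal-processing library, and the patent does not specify whether the 3-frame stacking overlaps, so non-overlapping groups are assumed here.

```python
# Parameters from the embodiment: 8000 Hz audio, 25 ms window, 10 ms step,
# 40 cepstral coefficients, 3 stacked frames -> 120-dim feature vectors.
SAMPLE_RATE = 8000
WIN = int(0.025 * SAMPLE_RATE)   # 200 samples per window
HOP = int(0.010 * SAMPLE_RATE)   # 80 samples between window starts
N_MFCC = 40

def num_frames(n_samples: int) -> int:
    """Number of sliding windows that fit in a signal of n_samples."""
    if n_samples < WIN:
        return 0
    return 1 + (n_samples - WIN) // HOP

def stack_frames(mfcc, context=3):
    """Concatenate every `context` consecutive MFCC frames into one vector
    (non-overlapping grouping assumed)."""
    stacked = []
    for i in range(0, len(mfcc) - context + 1, context):
        vec = []
        for frame in mfcc[i:i + context]:
            vec.extend(frame)
        stacked.append(vec)
    return stacked

# One second of audio yields 98 windows of 40 coefficients each; stacking
# groups of 3 gives 120-dimensional vectors, matching the embodiment.
frames = [[0.0] * N_MFCC for _ in range(num_frames(SAMPLE_RATE))]
features = stack_frames(frames)
print(num_frames(SAMPLE_RATE), len(features[0]))  # 98 120
```

With these parameters, one second of audio produces 1 + (8000 - 200) // 80 = 98 windows, and each stacked vector has 3 x 40 = 120 dimensions, as stated in paragraph [0040].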

[0041] The output of the phonem...



Abstract

The invention discloses an audio-driven face animation generation method, device, equipment and medium. The method comprises the following steps: 1) collecting a voice signal, extracting MFCC features, and inputting them into a phoneme recognizer to obtain the phoneme classification probabilities of the voice; 2) inputting the phoneme classification probabilities into an Embedding layer to obtain Embedding codes of the phonemes; 3) inputting the Embedding codes of the phonemes into an expression predictor to obtain the vertex displacements of a 3D face; 4) adding the vertex displacements of the 3D face to a face template with a natural expression to obtain a 3D face with a speaking expression; and 5) rendering the 3D faces over continuous time into 2D images to generate an animation video. Because pronunciation and facial expressions are directly associated, more than 2000 languages of the world can be recognized, yielding stronger generalization ability; meanwhile, animations in different languages can be dubbed, greatly reducing animation production cost.
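Step 4 of the pipeline above is a simple per-vertex addition, which can be sketched as follows. This is an illustrative fragment only: the function name, the tiny two-vertex "mesh" and all coordinate values are hypothetical, standing in for the predictor output and a full 3D face template.

```python
# Sketch of step 4: predicted per-vertex displacements are added to a
# neutral-expression face template to obtain the speaking-expression mesh
# for one frame. Vertices are (x, y, z) tuples.
def apply_displacement(template, displacement):
    """Add predicted (dx, dy, dz) offsets to each template vertex."""
    assert len(template) == len(displacement)
    return [
        (x + dx, y + dy, z + dz)
        for (x, y, z), (dx, dy, dz) in zip(template, displacement)
    ]

template = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]       # neutral-face vertices
displacement = [(0.0, -0.1, 0.0), (0.0, 0.1, 0.0)]  # expression-predictor output
mesh = apply_displacement(template, displacement)
print(mesh[0])  # (0.0, -0.1, 0.0)
```

Predicting displacements relative to a fixed neutral template, rather than absolute vertex positions, is what lets the same predictor drive different face templates.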

Description

Technical Field

[0001] The invention belongs to the field of artificial intelligence and relates to an audio-driven face animation generation method, device, equipment and medium.

Background Technique

[0002] Audio-driven facial animation generation covers speech processing, computer graphics, computer vision, multimedia and other disciplines. In recent years, with the continuous development of artificial intelligence and multimedia technology, virtual digital human technology has received widespread attention, and audio-driven 3D facial animation, as an important part of it, has attracted increasing attention. Audio-driven facial animation technology can greatly simplify the production of 3D character animation, match animation with dubbed audio tracks, and easily complete animated character production for games, movies and real-time digital assistants; it can be used in interactive real-time application scenarios, traditional facial animation creation tools Wai...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T13/20, G06T13/40
CPC: G06T13/40, G06T13/205
Inventor: 刘逸颖, 李太豪, 郑书凯, 阮玉平
Owner: ZHEJIANG LAB