
Virtual human animation synthesis method and system based on global emotion coding

A technology for animation synthesis and virtual humans, applied in speech analysis, speech recognition, instruments, and related fields, which addresses problems such as insufficient emotional control over the generated animation, the inability to automatically extract emotion from the input speech, and the lack of noise robustness in virtual human animation generation.

Pending Publication Date: 2021-09-14
SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

[0011] In order to solve the technical problems of insufficient control over the emotion of the generated animation, the inability to automatically extract emotion from the input speech for emotion control, and the lack of noise robustness in the virtual human animation generation system, the present invention proposes a virtual human animation synthesis method and system based on global emotion coding.



Detailed Description of Embodiments

[0038] To provide a clearer understanding of the technical features, purposes, and effects of the present invention, specific embodiments of the present invention are now described with reference to the accompanying drawings.

[0039] The description involves acronyms for key terms, which are explained here in advance:

[0040] LSTM: Long Short-Term Memory, a long short-term memory network, which is one realization of the recurrent neural network (RNN);

[0041] MFCC: Mel-Frequency Cepstral Coefficient, a feature commonly used in speech processing that mainly captures frequency-domain information of the speech signal;

[0042] PPG: Phonetic PosteriorGram, i.e., phoneme posterior probabilities, an intermediate representation of the speech recognition result that gives, for each speech frame, the posterior probability of belonging to each phoneme;

[0043] GRU: Gated Recurrent Unit, a gated recurrent neural network unit that simplifies the LSTM structure;
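To make these building blocks concrete, the following is a minimal, hypothetical sketch (not taken from the patent) of how an MFCC sequence is typically extracted with librosa and summarized by a bidirectional GRU into a single global acoustic vector; the file name, sampling rate, and layer sizes are illustrative assumptions.

```python
# Hypothetical illustration of the MFCC and bidirectional-GRU building blocks
# named above; all parameter values are assumptions, not the patent's settings.
import librosa
import torch
import torch.nn as nn

# Load a speech waveform and extract an MFCC sequence (coefficients x frames).
y, sr = librosa.load("speech.wav", sr=16000)          # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # shape: (13, T)
mfcc_seq = torch.from_numpy(mfcc.T).float().unsqueeze(0)  # (1, T, 13)

# A bidirectional GRU reads the MFCC sequence in both directions.
bigru = nn.GRU(input_size=13, hidden_size=128,
               bidirectional=True, batch_first=True)
outputs, h_n = bigru(mfcc_seq)   # outputs: (1, T, 256); h_n: (2, 1, 128)

# Concatenating the two final hidden states gives one global acoustic vector.
global_acoustic = torch.cat([h_n[0], h_n[1]], dim=-1)  # (1, 256)
print(global_acoustic.shape)
```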



Abstract

The invention provides a virtual human animation synthesis method and system based on global emotion coding. The method comprises the following steps. The input speech feature is converted into a phoneme posterior probability feature by a speech recognition model; a simulated noise sequence is obtained by a noise encoder and summed with the phoneme posterior probability feature to obtain a noisy phoneme posterior probability feature, from which a global content feature is obtained through a fully connected layer. A Mel-frequency cepstral coefficient feature sequence is extracted from the emotional speech; a global acoustic feature vector is extracted through a bidirectional gated recurrent unit network; a latent vector matrix is set, and attention is computed between the global acoustic feature vector and the latent vectors to obtain a global emotion feature. The global emotion feature is spliced onto the global content feature, context information is modeled by a bidirectional long short-term memory network, and facial animation parameters corresponding to the emotion and mouth-shape information are generated, so that an emotionally expressive virtual human animation is finally produced.
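The pipeline described in the abstract can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the patent's implementation: the class name, module names, and all dimensions (number of phonemes, MFCC size, number of latent emotion tokens, number of facial animation parameters) are invented for the example.

```python
# Hypothetical sketch of the synthesis pipeline described in the abstract.
# All dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionAwareAnimationModel(nn.Module):
    def __init__(self, n_phonemes=40, n_mfcc=13, d_content=256,
                 d_emotion=128, n_tokens=8, n_anim_params=52):
        super().__init__()
        # Noise encoder: produces a simulated noise sequence that is summed
        # with the phoneme posterior probability (PPG) features.
        self.noise_encoder = nn.GRU(n_phonemes, n_phonemes, batch_first=True)
        # Fully connected layer mapping noisy PPGs to global content features.
        self.content_fc = nn.Linear(n_phonemes, d_content)
        # Bidirectional GRU that turns the MFCC sequence of the emotional
        # speech into one global acoustic feature vector.
        self.acoustic_gru = nn.GRU(n_mfcc, d_emotion // 2,
                                   bidirectional=True, batch_first=True)
        # Learnable latent (implicit) vector matrix used for attention.
        self.emotion_tokens = nn.Parameter(torch.randn(n_tokens, d_emotion))
        # Bidirectional LSTM that models context over the spliced features
        # and a head that predicts facial animation parameters per frame.
        self.context_lstm = nn.LSTM(d_content + d_emotion, 256,
                                    bidirectional=True, batch_first=True)
        self.anim_head = nn.Linear(512, n_anim_params)

    def forward(self, ppg, mfcc):
        # ppg:  (B, T, n_phonemes) phoneme posterior probabilities
        # mfcc: (B, T_a, n_mfcc)   MFCCs of the emotional speech
        noise, _ = self.noise_encoder(ppg)          # simulated noise sequence
        content = self.content_fc(ppg + noise)      # (B, T, d_content)

        _, h_n = self.acoustic_gru(mfcc)            # h_n: (2, B, d_emotion/2)
        global_acoustic = torch.cat([h_n[0], h_n[1]], dim=-1)  # (B, d_emotion)

        # Attention of the global acoustic vector over the latent emotion tokens.
        scores = global_acoustic @ self.emotion_tokens.t()       # (B, n_tokens)
        weights = F.softmax(scores, dim=-1)
        global_emotion = weights @ self.emotion_tokens            # (B, d_emotion)

        # Broadcast the global emotion feature and splice it onto the content.
        emotion_seq = global_emotion.unsqueeze(1).expand(-1, content.size(1), -1)
        fused = torch.cat([content, emotion_seq], dim=-1)

        ctx, _ = self.context_lstm(fused)           # (B, T, 512)
        return self.anim_head(ctx)                  # (B, T, n_anim_params)
```

As a usage example under these assumed shapes, `EmotionAwareAnimationModel()(torch.rand(1, 100, 40), torch.randn(1, 100, 13))` would return a (1, 100, 52) tensor of per-frame facial animation parameters, mirroring the abstract's flow from noisy content features plus a global emotion feature to the animation output.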

Description

Technical Field

[0001] The invention relates to the field of speech processing, and in particular to a virtual human animation synthesis method and system based on global emotion coding.

Background

[0002] At present, speech-driven virtual human animation generation has received substantial research attention in industry, and its practical value has been demonstrated by a large number of deployed application scenarios. Traditional speech-driven virtual human animation generation focuses mainly on the quality of the generated mouth shapes and pays less attention to the expression and emotion of the generated face. Emotionally expressive virtual human animation generation is also widely used in real scenarios. On the one hand, emotional expressiveness can enhance the realism of virtual characters, improve the user interaction experience, and increase users' willingness to interact. In products such as virtual assistants and virtual companions, compared with the original traditional method, it brings users a...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G10L15/02G10L15/06G10L15/16G10L15/183G10L15/25G10L15/26G10L19/012G10L25/24G10L25/63
CPCG10L15/063G10L15/02G10L15/16G10L15/183G10L15/25G10L15/26G10L19/012G10L25/24G10L25/63G10L2015/025
Inventor 吴志勇黄晖榕
Owner SHENZHEN GRADUATE SCHOOL TSINGHUA UNIV