
Modeling and controlling method for synchronizing voice and mouth shape of virtual character

A virtual character modeling and control technology in the field of speech synthesis. It addresses the problem that existing methods depend heavily on training data volume and labeling, and achieves a small labeling workload, strong interpretability, and a model with clear physical meaning.

Active Publication Date: 2018-08-24
北京灵伴未来科技有限公司

AI Technical Summary

Problems solved by technology

However, such methods rely on a large number of lip animations as training data and therefore depend heavily on data volume and labeling work.




Embodiment Construction

[0039] Embodiments of the invention will be described below, but it should be appreciated that the invention is not limited to the described embodiments and that various modifications of the invention are possible without departing from the basic idea. The scope of the invention is therefore to be determined only by the appended claims.

[0040] As shown in Figure 1, the mouth shape modeling method includes the following steps:

[0041] Step 1: Divide the speech phonemes into different phoneme categories.

[0042] Generally, phonemes can be divided into vowel phonemes and consonant phonemes. Vowel phonemes are divided into several vowel phoneme categories according to opening degree and lip shape, and consonant phonemes are divided into several consonant phoneme categories according to articulation position. The method classifies phonemes based on their pronunciation features, which are attributes that are universal...
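To make Step 1 concrete, the sketch below (in Python) shows one way such a classification table could look. The phoneme inventory, category names, and groupings are illustrative assumptions, not the patent's actual tables; the excerpt only specifies that vowels are grouped by opening degree and lip shape, and consonants by articulation position.

    # Illustrative sketch of Step 1: a phoneme -> phoneme category lookup.
    # All phonemes, category names, and groupings are hypothetical examples.

    VOWEL_CATEGORIES = {
        "open_unrounded":  {"a"},            # large opening, spread lips
        "mid_unrounded":   {"e"},            # medium opening
        "close_unrounded": {"i"},            # small opening, spread lips
        "close_rounded":   {"o", "u"},       # small opening, rounded lips
    }

    CONSONANT_CATEGORIES = {
        "bilabial":    {"b", "p", "m"},      # lips fully closed
        "labiodental": {"f"},                # lower lip against upper teeth
        "alveolar":    {"d", "t", "n", "l"}, # tongue tip at alveolar ridge
        "velar":       {"g", "k", "h"},      # back-of-mouth articulation
    }

    # Flatten both tables into a single phoneme -> category dictionary.
    PHONEME_TO_CATEGORY = {
        phoneme: category
        for table in (VOWEL_CATEGORIES, CONSONANT_CATEGORIES)
        for category, phonemes in table.items()
        for phoneme in phonemes
    }

    def to_category_sequence(phonemes):
        """Convert a phoneme sequence into a phoneme category sequence."""
        return [PHONEME_TO_CATEGORY[p] for p in phonemes]

    print(to_category_sequence(["n", "i", "h", "a", "o"]))
    # ['alveolar', 'close_unrounded', 'velar', 'open_unrounded', 'close_rounded']

Because many phonemes share essentially the same mouth shape, the category table stays small, which is consistent with the small labeling workload the summary claims.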


Abstract

The invention belongs to virtual character posture control in the field of speech synthesis, and particularly relates to a modeling and control method for synchronizing the voice and mouth shape of a virtual character. The object of the invention is to reduce the amount of mouth shape animation data annotation and to achieve accurate, natural, and smooth mouth motion synchronized with the voice. The method comprises: generating a phoneme sequence corresponding to the voice to be synchronized; converting the phoneme sequence into a phoneme category sequence; converting the phoneme category sequence into a static mouth shape configuration sequence; converting the static mouth shape configuration sequence distributed on a time axis into a dynamically changing mouth shape configuration by means of a dynamic model; and rendering the dynamically changing mouth shape configuration into a posture image of the head and neck of the virtual character, displayed in synchronization with the voice signal. The method achieves efficient and natural mouth shape synchronization control of a virtual character without mouth shape animation data, using phonetic prior knowledge and a dynamic model.
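To illustrate the pipeline described in the abstract, here is a minimal Python sketch. It assumes a toy parameterization of the static mouth shape as two values (jaw opening and lip rounding) and stands in for the patent's dynamic model with simple first-order smoothing toward each static target; none of these specifics appear in this excerpt.

    # Illustrative sketch of the claimed pipeline: phoneme category sequence
    # -> static mouth shape targets on a time axis -> dynamically changing
    # mouth shape. The parameterization and the first-order smoother are
    # hypothetical stand-ins for the patent's actual dynamic model.

    # Static mouth shape per category: (jaw_opening, lip_rounding) in [0, 1].
    # Values are invented for illustration.
    STATIC_MOUTH_SHAPES = {
        "open_unrounded":  (0.9, 0.1),
        "close_unrounded": (0.2, 0.1),
        "close_rounded":   (0.2, 0.9),
        "bilabial":        (0.0, 0.3),
        "alveolar":        (0.3, 0.2),
        "velar":           (0.4, 0.2),
    }

    def dynamic_mouth_shapes(categories, durations, fps=30.0, alpha=0.35):
        """Smooth a timed sequence of static targets into a frame trajectory.

        categories -- phoneme category of each segment
        durations  -- length of each segment in seconds
        alpha      -- per-frame smoothing gain of the stand-in dynamic model
        """
        jaw, lips = 0.0, 0.0  # start from a closed, neutral mouth
        frames = []
        for category, duration in zip(categories, durations):
            target_jaw, target_lips = STATIC_MOUTH_SHAPES[category]
            for _ in range(max(1, round(duration * fps))):
                # First-order lag toward the current static target: the mouth
                # never jumps, so consecutive shapes blend smoothly.
                jaw += alpha * (target_jaw - jaw)
                lips += alpha * (target_lips - lips)
                frames.append((jaw, lips))
        return frames

    # Example: a short utterance as category segments with rough durations.
    trajectory = dynamic_mouth_shapes(
        ["alveolar", "close_unrounded", "velar", "open_unrounded", "close_rounded"],
        [0.08, 0.12, 0.08, 0.15, 0.12],
    )
    # Each (jaw, lips) frame would then drive rendering of the head-and-neck
    # posture image displayed in sync with the speech signal.

Because every quantity in such a model corresponds to a physical articulator parameter, the approach stays interpretable, which matches the "clear physical meaning" the summary claims for the method.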

Description

technical field

[0001] The invention belongs to the posture control of a virtual character in the field of speech synthesis, and in particular relates to a modeling and control method for synchronizing the voice and mouth shape of a virtual character.

Background technique

[0002] Virtual character modeling and rendering technologies are widely used in industries such as animation, games, and movies, and enabling virtual characters to speak with natural, smooth mouth movements synchronized with the voice is key to improving the user experience.

[0003] At present, lip-syncing a virtual character is a very time-consuming and labor-intensive task: designers must adjust the mouth shape configuration on the timeline according to the content of the audio. Some machine learning based methods can learn models from a large number of lip animations and use those models to generate mouth shapes for other input speech. However, such methods rely on a large number of lip animations as training data and therefore depend heavily on data volume and labeling work.


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/02; G10L15/06; G10L15/25; G10L13/02; G10L13/04
CPC: G10L13/00; G10L13/02; G10L15/02; G10L15/06; G10L15/25; G10L2015/025; G10L2015/0631
Inventors: 朱风云, 陈博, 张志平, 庞在虎
Owner: 北京灵伴未来科技有限公司