
Real-time audio-driven virtual character mouth shape synchronous control method

An audio-driven method in the field of virtual character gesture control. It addresses the problem that the phoneme sequence corresponding to the speech cannot be obtained synchronously, and achieves the effects of saving communication bandwidth, improving user experience, and reducing system complexity.

Active Publication Date: 2020-04-28
大连即时智能科技有限公司

AI Technical Summary

Problems solved by technology

[0010] Therefore, in order to address the above-mentioned situation in which the phoneme sequence corresponding to the speech cannot be obtained synchronously, there is an urgent need for a method that can recognize a mouth-shape sequence from the audio and use that sequence to synchronously drive the mouth-shape changes of the avatar.




Embodiment Construction

[0034] Embodiments of the present invention will be described below with reference to the drawings, but it should be understood that the invention is not limited to the described embodiments, and that various modifications are possible without departing from its basic idea. The scope of the invention is therefore determined only by the appended claims.

[0035] As shown in Figure 1, the real-time audio-driven virtual character lip synchronization control method provided by the present invention includes the following steps:

[0036] a step of identifying viseme probabilities from a real-time speech stream;

[0037] a step of filtering the viseme probabilities;

[0038] a step of converting the sampling rate of the viseme probabilities to the same sampling rate as the rendering frame rate of the avatar;

[0039] a step of converting the viseme probabilities into a standard mouth-shape configuration and performing mouth-shape rendering.
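The filtering and rate-conversion steps above ([0037]-[0038]) can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the moving-average filter, the analysis rate of 100 Hz, the render rate of 30 fps, and all function and parameter names are assumptions chosen for the example.

```python
import numpy as np

def smooth_and_resample(viseme_probs, src_rate, render_fps, win=5):
    """Filter per-frame viseme probabilities and resample them to the
    avatar's rendering frame rate (cf. steps [0037]-[0038]).

    viseme_probs: (T, V) array, one probability vector per analysis frame.
    """
    T, V = viseme_probs.shape
    # Step [0037]: moving-average filter to suppress frame-to-frame jitter
    # (the patent does not specify a filter type; this is one simple choice).
    kernel = np.ones(win) / win
    filtered = np.stack(
        [np.convolve(viseme_probs[:, v], kernel, mode="same") for v in range(V)],
        axis=1,
    )
    # Step [0038]: linear interpolation from the recognizer's sampling rate
    # (e.g. 100 Hz) onto the render clock (e.g. 30 fps).
    src_t = np.arange(T) / src_rate
    dst_t = np.arange(0.0, src_t[-1], 1.0 / render_fps)
    resampled = np.stack(
        [np.interp(dst_t, src_t, filtered[:, v]) for v in range(V)],
        axis=1,
    )
    # Re-normalize so each rendered frame remains a probability distribution,
    # ready to be mapped to mouth-shape weights in step [0039].
    return resampled / resampled.sum(axis=1, keepdims=True)
```

In step [0039], each resampled probability vector could then be used directly as blending weights over a set of standard mouth-shape configurations.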

[0040] As shown in Figure 2, ...



Abstract

The invention belongs to the field of virtual character posture control, and particularly relates to a real-time audio-driven virtual character mouth-shape synchronous control method. The method comprises the following steps: identifying viseme probabilities from a real-time voice stream, filtering the viseme probabilities, converting the sampling rate of the viseme probabilities to the same sampling rate as the rendering frame rate of the virtual character, and converting the viseme probabilities into a standard mouth-shape configuration and carrying out mouth-shape rendering. The method avoids the need to synchronously transmit phoneme-sequence or mouth-shape-sequence information alongside the audio stream, significantly reduces system complexity, coupling, and implementation difficulty, and is suitable for various application scenarios in which virtual characters are rendered on display equipment.

Description

Technical field

[0001] The invention belongs to the field of posture control of virtual characters, and in particular relates to a method for synchronously controlling the mouth shape of a virtual character driven by real-time audio.

Background technique

[0002] Virtual character modeling and rendering technology is widely used in industries such as animation, games, and movies. Enabling virtual characters to produce natural, smooth, voice-synchronized mouth movements is key to improving user experience. In a real-time system, audio acquired in real time must be played as a stream synchronously with the rendered avatar image, and during this process the synchronization between the audio and the character's mouth shape must be ensured.

[0003] Its application scenarios include:

[0004] 1. The real-time audio is the voice generated by a speech synthesizer.

[0005] 1.1. The phoneme sequence corresponding to the speech can be obtained ...
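The speech-synthesizer scenario in [0004]-[0005] presumes a mapping from phonemes to mouth shapes: many phonemes share one visible mouth configuration, so only viseme classes need to be distinguished. As a hedged illustration, the grouping and class names below are hypothetical examples, not taken from the patent:

```python
# Hypothetical phoneme-to-viseme grouping: phonemes that look alike on the
# lips (e.g. /p/, /b/, /m/) collapse into a single viseme class.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "a": "open", "e": "mid", "i": "spread",
    "o": "round", "u": "round",
    "sil": "closed",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence (e.g. from a TTS front end) to viseme labels;
    unknown phonemes fall back to a neutral closed mouth shape."""
    return [PHONEME_TO_VISEME.get(p, "closed") for p in phonemes]
```

For example, `phonemes_to_visemes(["b", "a", "u"])` yields one viseme label per phoneme, which a renderer could then animate in sync with the audio.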


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L21/10; G10L21/18; G10L25/57; G10L15/02; H04N21/43
CPC: G10L15/02; G10L21/10; G10L21/18; G10L25/57; G10L2015/025; G10L2021/105; H04N21/4307
Inventors: 朱风云, 陈博
Owner: 大连即时智能科技有限公司