
A real-time audio-driven virtual character lip synchronization control method

An audio-driven virtual character technology, applied in the field of virtual character posture control, which solves the problem of being unable to obtain the phoneme sequence corresponding to the speech synchronously, and achieves the effects of saving communication bandwidth, reducing implementation difficulty, and reducing complexity.

Active Publication Date: 2021-06-01
大连即时智能科技有限公司

AI Technical Summary

Problems solved by technology

[0010] Therefore, in order to solve the above-mentioned situation, in which the phoneme sequence corresponding to the speech cannot be obtained synchronously, there is an urgent need for a method that can recognize a mouth-shape (viseme) sequence from the audio and use that sequence to synchronously drive the mouth-shape changes of the avatar.



Embodiment Construction

[0034] Embodiments of the present invention will be described below with reference to the drawings, but it should be understood that the invention is not limited to the described embodiments and that various modifications of the invention are possible without departing from its basic idea. The scope of the invention is therefore to be determined only by the appended claims.

[0035] As shown in Figure 1, the real-time audio-driven virtual character lip synchronization control method provided by the present invention includes the following steps:

[0036] a step of identifying viseme probabilities from a real-time speech stream;

[0037] a step of filtering the viseme probabilities;

[0038] a step of converting the sampling rate of the viseme probabilities to the rendering frame rate of the avatar;

[0039] a step of converting the viseme probabilities into a standard lip configuration and performing lip rendering.
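Taken together, the four steps form a simple per-frame pipeline. A minimal sketch in Python of one way such a pipeline could work is shown below; the viseme inventory, the 100 Hz acoustic frame rate, the 30 fps render rate, and the moving-average filter are all illustrative assumptions, since the patent text does not fix an implementation, and step 1 (the acoustic model) is simulated with random probabilities:

```python
import numpy as np

VISEMES = ["sil", "A", "E", "O", "F", "M"]   # hypothetical viseme inventory

def filter_probs(probs, window=5):
    """Step 2: smooth per-frame viseme probabilities with a moving average."""
    kernel = np.ones(window) / window
    return np.stack([np.convolve(probs[:, i], kernel, mode="same")
                     for i in range(probs.shape[1])], axis=1)

def resample_probs(probs, src_rate=100.0, dst_rate=30.0):
    """Step 3: convert the viseme-probability sampling rate (assumed 100 Hz
    acoustic frames) to the avatar's rendering frame rate (assumed 30 fps)
    by linear interpolation."""
    n_src = probs.shape[0]
    t_src = np.arange(n_src) / src_rate
    t_dst = np.arange(int(n_src * dst_rate / src_rate)) / dst_rate
    return np.stack([np.interp(t_dst, t_src, probs[:, i])
                     for i in range(probs.shape[1])], axis=1)

def to_lip_configuration(frame_probs):
    """Step 4: map one render frame's probabilities to a standard lip
    configuration (here simply the most probable viseme)."""
    return VISEMES[int(np.argmax(frame_probs))]

# Step 1 would be an acoustic model over the real-time speech stream;
# it is simulated here with normalized random probabilities.
rng = np.random.default_rng(0)
raw = rng.random((200, len(VISEMES)))        # 2 s of 100 Hz frames
raw /= raw.sum(axis=1, keepdims=True)        # rows sum to 1

smooth = filter_probs(raw)
frames = resample_probs(smooth)              # now at 30 fps
lip_sequence = [to_lip_configuration(f) for f in frames]
print(len(lip_sequence))                     # 60 render frames for 2 s
```

The final `lip_sequence` would then drive the renderer one entry per display frame, which is what keeps the mouth shape aligned with the audio clock.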

[0040] As shown in Figure 2, ...



Abstract

The present invention discloses a real-time audio-driven virtual character lip synchronization control method. The method includes the following steps: a step of identifying viseme probabilities from a real-time speech stream; a step of filtering the viseme probabilities; a step of converting the sampling rate of the viseme probabilities to the rendering frame rate of the avatar; and a step of converting the viseme probabilities into a standard lip configuration and performing lip rendering. This method avoids the need to transmit synchronized phoneme-sequence or mouth-shape-sequence information alongside the audio stream, which significantly reduces system complexity, coupling, and implementation difficulty. It is suitable for various application scenarios in which virtual characters are rendered on display devices.

Description

Technical Field

[0001] The invention belongs to the field of posture control of virtual characters, and in particular relates to a method for synchronously controlling the mouth shape of a virtual character driven by real-time audio.

Background Technique

[0002] Virtual character modeling and rendering technology is widely used in industries such as animation, games, and movies. Enabling virtual characters to produce natural, smooth, voice-synchronized mouth movements is key to improving the user experience. In a real-time system, audio acquired in real time as a stream must be played back together with the synchronously rendered avatar image, and during this process the synchronization between the audio and the character's mouth shape must be ensured.

[0003] Its application scenarios include:

[0004] 1. The real-time audio is the voice generated by a speech synthesizer;

[0005] 1.1. The phoneme sequence corresponding to the speech can be ...
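The background above stresses that rendered avatar frames must stay synchronized with streamed audio playback. One common way to achieve this, sketched here under assumed values (a 16 kHz audio sample rate and a 30 fps lip sequence, neither of which is fixed by the patent text), is to drive frame selection from the audio playback clock rather than from a separate timer:

```python
import math

def lip_frame_for_audio_position(samples_played, sample_rate=16000, fps=30.0):
    """Pick the lip-sequence frame matching the audio playback position.

    Deriving the frame index from how much audio has actually been played
    keeps the mouth shape in sync even if rendering briefly stalls.
    The sample rate and fps here are illustrative assumptions.
    """
    t = samples_played / sample_rate      # seconds of audio played so far
    return int(math.floor(t * fps))       # index into the lip sequence

print(lip_frame_for_audio_position(16000))   # after 1 s of audio -> frame 30
```

Because the frame index is a pure function of the audio position, no side channel carrying phoneme or mouth-shape timing needs to accompany the audio stream, which is consistent with the bandwidth and coupling benefits claimed above.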

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L21/10; G10L21/18; G10L25/57; G10L15/02; H04N21/43
CPC: G10L15/02; G10L21/10; G10L21/18; G10L25/57; G10L2015/025; G10L2021/105; H04N21/4307
Inventors: 朱风云, 陈博
Owner: 大连即时智能科技有限公司