
Method and system for driving character gesture by voice

A technology for driving character gestures by voice, applied in the field of computer vision, that addresses problems such as the inability to generate continuous gestures and the inability to properly generate both types of gestures at the same time, and achieves the effect of wide applicability.

Active Publication Date: 2021-02-05
北京中科深智科技有限公司

AI Technical Summary

Problems solved by technology

[0003] However, prior-art speech gesture generation systems use a single modality to represent speech, namely audio or text. As a result, these systems can only produce audio-related beat gestures or text-related gestures, such as raising a hand when saying "high," but cannot properly generate both types of gestures at the same time, much less generate continuous gestures.




Embodiment Construction

[0018] The technical solutions of the present invention are further described below with reference to the accompanying drawings and specific embodiments.

[0019] The accompanying drawings are for illustrative purposes only; they are schematic diagrams rather than physical drawings and should not be construed as limiting this patent. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the dimensions of the actual product. Those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings.

[0020] A method for driving a character gesture by voice provided by an embodiment of the present invention, as shown in figure 1, includes the following steps:

[0021] Extract text features and audio features from the speech signal;

[0022] Input the text features and audio features into an autoregressive model, predict the current-stage joint angle rotation sequence through the autoregressive model, and feed the current-stage joint angle rotation sequence back to the autoregressive model for predicting the next-stage joint angle rotation sequence;
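The patent text contains no code, so the following is only a minimal sketch of how the two steps above could be wired together: multimodal feature extraction followed by stage-by-stage autoregressive prediction, with the previous stage's joint angle rotation sequence fed back as an input. The feature dimensions, the toy linear model, and the placeholder feature extractor are assumptions for illustration, not the patent's actual design.

import numpy as np

# Hypothetical dimensions; the patent does not specify them.
AUDIO_DIM, TEXT_DIM, JOINT_DIM, STAGE_LEN = 26, 64, 3 * 15, 10  # 15 joints x 3 Euler angles


def extract_features(speech_signal, transcript):
    """Placeholder for real feature extraction (e.g. MFCC-style audio
    features and word-embedding text features); returns random features."""
    rng = np.random.default_rng(0)
    audio_feat = rng.normal(size=(STAGE_LEN, AUDIO_DIM))
    text_feat = rng.normal(size=(STAGE_LEN, TEXT_DIM))
    return audio_feat, text_feat


class ToyAutoregressiveModel:
    """Stand-in for the autoregressive model: the joint angle sequence
    predicted for the current stage is an extra input when predicting
    the next stage."""

    def __init__(self):
        rng = np.random.default_rng(1)
        in_dim = AUDIO_DIM + TEXT_DIM + JOINT_DIM
        self.W = rng.normal(scale=0.01, size=(in_dim, JOINT_DIM))

    def predict_stage(self, audio_feat, text_feat, prev_joint_seq):
        x = np.concatenate([audio_feat, text_feat, prev_joint_seq], axis=-1)
        return np.tanh(x @ self.W)  # joint angle rotations, in radians


def generate_gesture(speech_signal, transcript, n_stages=3):
    model = ToyAutoregressiveModel()
    prev = np.zeros((STAGE_LEN, JOINT_DIM))  # neutral pose for the first stage
    stages = []
    for _ in range(n_stages):
        audio_feat, text_feat = extract_features(speech_signal, transcript)
        prev = model.predict_stage(audio_feat, text_feat, prev)  # feedback loop
        stages.append(prev)
    return np.concatenate(stages, axis=0)  # full joint angle rotation sequence


if __name__ == "__main__":
    joint_angles = generate_gesture(speech_signal=None, transcript="hello")
    print(joint_angles.shape)  # (n_stages * STAGE_LEN, JOINT_DIM)

The feedback of the previous stage's output is what lets consecutive stages join smoothly, which is how the claimed continuous-gesture effect would be obtained.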



Abstract

The invention discloses a method and a system for driving a character gesture by voice. The method comprises the following steps: extracting text features and audio features from a voice signal; inputting the text features and the audio features into an autoregressive model to predict a current-stage joint angle rotation sequence, and feeding the current-stage joint angle rotation sequence back to the autoregressive model for predicting the next-stage joint angle rotation sequence; and generating a gesture from the current joint angle rotation sequence, and synthesizing and outputting the gesture together with the voice signal. According to the invention, both types of gestures can be generated at the same time, and continuous gestures can be obtained through the prediction structure of the autoregressive model, so that a vivid effect is achieved, the user can readily perceive the emotion of the virtual character, and the method can be widely applied to virtual agents and humanoid robots.
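As an illustration of the final step in the abstract, turning a predicted joint angle rotation sequence into a visible gesture that is then played back together with the voice signal, the sketch below uses a hypothetical 2-joint planar arm (the patent does not specify a skeleton) and toy forward kinematics to convert angles into end-effector positions. A real system would instead retarget the angles onto a full character rig and render the animation in sync with the audio.

import numpy as np


def forward_kinematics_2d(joint_angles, bone_lengths=(1.0, 0.8)):
    """Toy forward kinematics for a 2-joint planar arm: turns a joint angle
    rotation sequence of shape (T, 2) into end-effector positions, i.e. the
    visible gesture trajectory."""
    positions = []
    for shoulder, elbow in joint_angles:
        x1 = bone_lengths[0] * np.cos(shoulder)
        y1 = bone_lengths[0] * np.sin(shoulder)
        x2 = x1 + bone_lengths[1] * np.cos(shoulder + elbow)
        y2 = y1 + bone_lengths[1] * np.sin(shoulder + elbow)
        positions.append((x2, y2))
    return np.asarray(positions)


# Example: a sweeping motion; in the full pipeline these angles would come
# from the autoregressive model and be played back in sync with the voice.
angles = np.stack([np.linspace(0, np.pi / 3, 30),
                   np.linspace(0, np.pi / 6, 30)], axis=1)
print(forward_kinematics_2d(angles)[:3])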

Description

Technical Field

[0001] The invention relates to the technical field of computer vision, and in particular to a method and system for driving character gestures by voice.

Background

[0002] In the real world, people accompany their speech with gestures. Gestures reflect the emotional state of the speaker and play a key role in conveying information. Therefore, a virtual agent or an animated virtual character also needs to gesture while speaking, in order to achieve a realistic effect and allow the user to perceive the emotion of the virtual character.

[0003] However, speech gesture generation systems in the prior art use a single modality to represent speech, namely audio or text. As a result, these systems can only produce audio-related beat gestures or text-related gestures, such as raising a hand when saying "high," but cannot properly generate both types of gestures at the same time, much less generate continuous gestures.


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F3/01
CPC: G06F3/017
Inventor: Not disclosed
Owner: 北京中科深智科技有限公司