Limb movement and language factor matching method and device for virtual image

A virtual-image and body-movement technology in the field of data processing. It addresses problems such as mismatch between movement and language, inconsistent expression, and poor synchronization between language and movement, achieving synchronized language and movement with consistent expression.

Pending Publication Date: 2021-10-22
Applicant: 小哆智能科技(北京)有限公司


Problems solved by technology

[0004] To this end, the present invention provides a method and device for matching an avatar's body movements with language factors, solving the prior-art problems in which a virtual digital human's movements and language do not match, language and movement are poorly synchronized, and expressions are inconsistent.



Examples


Embodiment 1

[0036] Referring to Figure 1, a body-movement and language-factor matching method for an avatar is provided, comprising the following steps:

[0037] S1. Avatar body-movement generation: preset custom actions, where each custom action includes the avatar's position on the map and the body's movement path, and generate corresponding two-dimensional action data for the custom action;

[0038] S2. Avatar-language matching interaction: perform semantic learning on the two-dimensional action data, and perform avatar-language matching interaction to generate motion control information;

[0039] S3. Avatar skeleton driving: transmit the motion control information to the avatar's underlying driver, which controls the avatar's skeleton-driving action according to that information.
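The three steps above can be sketched as a minimal pipeline. This is an illustrative assumption, not the patent's actual implementation; names such as `CustomAction` and `match_language` are hypothetical, and the "semantic learning" of S2 is stubbed as a keyword lookup.

```python
from dataclasses import dataclass

# Hypothetical data types; names are illustrative, not from the patent.
@dataclass
class CustomAction:
    map_position: tuple   # avatar's (x, y) position on the map
    movement_path: list   # ordered waypoints for the limb movement

def generate_action_data(action: CustomAction) -> dict:
    """S1: turn a preset custom action into two-dimensional action data."""
    return {"position": action.map_position, "path": action.movement_path}

def match_language(action_data: dict, utterance: str) -> dict:
    """S2: semantic learning + avatar-language matching -> motion control info.
    The semantic step is stubbed here as a simple keyword check."""
    gesture = "wave" if "hello" in utterance.lower() else "idle"
    return {"gesture": gesture, "path": action_data["path"]}

def drive_skeleton(control: dict) -> str:
    """S3: hand the motion control information to the underlying skeletal driver."""
    return f"driving skeleton: {control['gesture']} along {len(control['path'])} waypoints"

# Usage: a greeting utterance matched against a short movement path
action = CustomAction(map_position=(0, 0), movement_path=[(0, 0), (1, 0), (1, 1)])
data = generate_action_data(action)
control = match_language(data, "Hello, welcome!")
print(drive_skeleton(control))  # driving skeleton: wave along 3 waypoints
```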

[0040] Specifically, the skeleton driving includes data storage, image display, and frame-animation processing. According to the received differe...
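As a hedged illustration of the data storage and frame-animation processing mentioned above, the driver below stores frame sequences per action and returns the sequence for a requested action. The class name, frame format, and lookup scheme are assumptions for illustration only.

```python
class SkeletonDriver:
    """Minimal sketch of an underlying driver: stores frame data per action
    (data storage) and steps through frames when control information arrives."""

    def __init__(self):
        self.frames = {}  # data storage: action name -> list of frames

    def load(self, action: str, frames: list):
        """Register a frame sequence for a named action."""
        self.frames[action] = frames

    def play(self, control: dict) -> list:
        """Return the frame sequence for the requested gesture; in a real
        driver, image display and per-frame processing would happen here."""
        return self.frames.get(control["gesture"], [])

# Usage
driver = SkeletonDriver()
driver.load("wave", ["frame_0", "frame_1", "frame_2"])
print(driver.play({"gesture": "wave"}))  # ['frame_0', 'frame_1', 'frame_2']
```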

Embodiment 2

[0062] Referring to Figure 2, the present invention also provides a body-movement and language-factor matching device for an avatar, which applies the method of Embodiment 1 and includes:

[0063] The avatar body-movement generation module 1, used to preset custom actions, where each custom action includes the avatar's position on the map and the limbs' movement path, and to generate the corresponding two-dimensional action data for the custom action;

[0064] The avatar-language matching interaction module 2, used to perform semantic learning on the two-dimensional action data and to perform avatar-language matching interaction to generate motion control information;

[0065] The avatar skeleton driving module 3 is configured to transmit the motion control information to the underlying driver of the avatar, and the underlying driver controls the skeleton driving action of the avatar according to the m...
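The three modules above can be sketched as a composed device. All class and method names here are illustrative assumptions; the matching logic is a stub standing in for the patent's semantic learning.

```python
class BodyMovementGenerationModule:
    """Module 1: preset custom actions -> two-dimensional action data."""
    def generate(self, position, path):
        return {"position": position, "path": path}

class LanguageMatchingModule:
    """Module 2: semantic learning + matching -> motion control information
    (stubbed as a keyword check for illustration)."""
    def match(self, action_data, utterance):
        gesture = "nod" if "yes" in utterance.lower() else "idle"
        return {"gesture": gesture, "path": action_data["path"]}

class SkeletonDrivingModule:
    """Module 3: forwards motion control info to the underlying driver."""
    def drive(self, control):
        return f"skeleton action: {control['gesture']}"

class MatchingDevice:
    """Wires the three modules into the S1 -> S2 -> S3 pipeline."""
    def __init__(self):
        self.gen = BodyMovementGenerationModule()
        self.matcher = LanguageMatchingModule()
        self.driver = SkeletonDrivingModule()

    def run(self, position, path, utterance):
        data = self.gen.generate(position, path)
        control = self.matcher.match(data, utterance)
        return self.driver.drive(control)

# Usage
device = MatchingDevice()
print(device.run((2, 3), [(2, 3), (2, 4)], "Yes, please"))  # skeleton action: nod
```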

Embodiment 3

[0076] Embodiment 3 of the present invention provides a computer-readable storage medium that stores program code for the body-movement and language-factor matching method for avatars; the program code includes instructions for implementing the method of Embodiment 1 or any possible implementation thereof.

[0077] The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a Solid State Disk (SSD)), or the like.



Abstract

The invention discloses a method and device for matching a virtual image's body movements with language factors. The method comprises: virtual-image limb-movement generation, presetting a custom movement that comprises the virtual image's position on a map and a body-movement path, and generating corresponding two-dimensional movement data from the custom movement; virtual-image and language matching interaction, performing semantic learning on the two-dimensional movement data and performing virtual-image and language matching interaction to generate motion control information; and virtual-image skeleton driving, transmitting the motion control information to the virtual image's underlying driver, which controls the virtual image's skeleton-driving action according to that information. The method and device match the virtual image's semantics with its actions, so that emotional expression, facial-expression interaction, limb actions, and the like during communication approach a real person as closely as possible, with language and action synchronized and consistent in expression.

Description

Technical Field
[0001] The invention belongs to the technical field of data processing, and in particular relates to a method and device for matching body movements and language factors of virtual images.
Background Technique
[0002] In recent years, virtual character technology has attracted increasing attention from major technology companies. Virtual digital characters use AI technologies such as voice interaction and avatar generation to endow cultural and entertainment IP characters with multimodal interaction capabilities, helping industries such as media, education, exhibitions, and customer service upgrade to intelligent entertainment.
[0003] With the development of China's economy and society, the professionalism of the service industry has continuously improved, and people's requirements for it have risen accordingly. Many virtual digital human images have therefore emerged, which are combined with kno...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T13/80, G06T13/20, G06T13/40, G06F40/30, G10L15/18, G10L15/26, G10L13/027, G06F16/332, G06F16/33, G06N20/00
CPC: G06T13/80, G06T13/205, G06T13/40, G06F40/30, G10L15/18, G10L15/26, G10L13/027, G06F16/3329, G06F16/3343, G06N20/00
Inventors: 余国军, 虞强, 尹川
Owner 小哆智能科技(北京)有限公司