Virtual figure expression driving method and system

A virtual-character expression driving technology, applied in the field of virtual character expression driving. It addresses the problems that traditional production methods cannot be applied to real-time interactive environments, cannot meet market demand, and are time-consuming and labor-intensive, and it achieves accurate and stable capture, low cost, and good real-time performance.

Inactive Publication Date: 2018-04-20
BEIJING HULIAN YIDA TECH
11 Cites · 10 Cited by

AI Technical Summary

Problems solved by technology

[0003] At present, expression animation produced by the traditional method demands a high technical level from animators, so not all animators can quickly produce accurate and natural facial expressions; the cost in time and labor is extremely high and cannot meet market demand; more importantly, the traditional production method cannot be applied to a real-time interactive environment at all, which limits the creativity of programs and performances.

Method used




Specific embodiments

[0069] The present invention uses depth and color cameras to extract weighted facial-expression information from a real person through a new algorithm, and ultimately drives a virtual character to make the same expressions as the real person in real time. The depth camera collects accurate point-cloud data of facial skin displacement, while the color camera is used for image data collection: through deep learning, it subdivides facial expressions into 72 types and assigns weight values to different facial muscles according to their displacement. These weight values are used to find the closest matching expression, which drives the facial expression of the avatar. Because computing power differs across application platforms, fewer than 72 weights can be used to adapt to low-computing-power platforms (such as mobile devices).
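The adaptation to low-computing-power platforms described above can be sketched as keeping only the k most significant of the 72 expression weights. This is a minimal illustration; the function name, the choice of k, and the renormalization step are assumptions, not details from the patent.

```python
# Hypothetical sketch: reduce a full set of 72 expression weights to the
# top-k most significant ones for a low-compute platform, then renormalize
# so the retained weights still sum to 1.

def reduce_weights(weights, k=24):
    """Keep the k largest expression weights, zero out the rest, renormalize."""
    if k >= len(weights):
        return list(weights)
    # Indices of the k largest weights.
    top = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)[:k]
    kept = set(top)
    reduced = [w if i in kept else 0.0 for i, w in enumerate(weights)]
    total = sum(reduced)
    if total > 0:
        reduced = [w / total for w in reduced]
    return reduced

# Example: 72 weights, mostly zero, with a few dominant expression components.
full = [0.0] * 72
full[3], full[17], full[40] = 0.5, 0.3, 0.2
mobile_weights = reduce_weights(full, k=2)  # keeps only the two largest
```

A desktop engine could consume the full 72-weight vector while a mobile build consumes the reduced one, trading expression fidelity for computation.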

[0070] Description of the workflow of the facial capture system:

...


PUM

No PUM

Abstract

The invention discloses a virtual figure expression driving method and system. The method includes the following steps: acquiring color information and depth information of human facial expressions; synthesizing the depth and color information of the human facial expressions and extracting key information nodes; analyzing and comparing the key information nodes with a previously learned human facial expression template to digitize the actual human facial expression as a weight value, and transmitting the weight value to a middleware; performing standardization processing on the weight value by the middleware, performing optimization processing to reduce the transmission delay of the data, and outputting the data to a corresponding engine port; and driving virtual figure expressions to change through the processed weight data. Compared with pure video recording and the post-rendering of conventional three-dimensional production tools, the technical scheme can drive virtual figures in real time, providing technical support for live-broadcast programs, real-time interactive performances, and the like.
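The middleware stage described in the abstract (standardizing weight values and optimizing them to reduce transmission delay before output to an engine port) can be illustrated with a minimal sketch. The 8-bit quantization and all names here are assumptions chosen for illustration, not the patent's actual encoding.

```python
# Hypothetical middleware sketch: clamp raw weight values into [0, 1],
# then quantize each to one byte so the payload sent to the engine port
# is a fraction of the size of raw floating-point values.

def standardize(raw):
    """Clamp raw comparison scores into the [0, 1] weight range."""
    return [min(1.0, max(0.0, w)) for w in raw]

def pack_weights(weights):
    """Quantize each weight to 8 bits and pack into a compact byte string."""
    return bytes(int(round(w * 255)) for w in weights)

def unpack_weights(payload):
    """Engine-side decode back to floats in [0, 1]."""
    return [b / 255.0 for b in payload]

raw = [1.2, -0.1, 0.5, 0.25]   # raw scores from the template comparison
norm = standardize(raw)        # standardized weight values
payload = pack_weights(norm)   # 4 bytes instead of 4 floats
decoded = unpack_weights(payload)
```

One byte per weight bounds the quantization error to about 1/255 per weight while cutting the per-frame payload, which is one plausible way to reduce transmission delay in a real-time pipeline.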

Description

Technical field

[0001] The present invention relates to the technical field of virtual-animation display, and specifically to a virtual character expression driving method and system: a technology for reflecting characters' facial expressions, demeanor, emotions, and atmosphere in virtual content in real time, applicable to expression production in all virtual content.

Background technique

[0002] With the maturing of technology in the field of virtual content production and the market's gradual acceptance of virtual content carriers, the content production field urgently needs efficient, low-cost animation and expression production methods. The efficiency of traditional animators producing expression animations by hand can no longer meet the content-production needs of existing non-film industries; at the same time, the demand for real-time interaction in tourism, radio and television, games, and virtual ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T13/40, G06N99/00
CPC: G06T13/40, G06N20/00
Inventor 刘福菊樊乙刘星辰常江
Owner BEIJING HULIAN YIDA TECH