
A facial motion capture method and system based on deep learning

Classification: facial action recognition, deep learning; applied in animation production, computer parts, instruments, etc. Problem addressed: low capture accuracy. Effects achieved: efficient parallel computing, real-time facial motion capture, and reduced production cost.

Active Publication Date: 2022-04-12
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

The former uses an optical lens and algorithms to infer human facial expressions and movements, as in Faceware's helmet-mounted single-camera facial motion capture system. Its advantages are low cost, ready availability, and ease of use; its disadvantage is that capture accuracy is relatively low compared with other methods. The latter obtains two-dimensional data through an optical lens while acquiring depth information through additional means or devices, such as multi-view cameras or structured light. For example, Apple's Animoji places an infrared camera next to the front camera to collect depth information. This approach offers fast processing and high precision, but requires additional depth-acquisition equipment.



Examples


Embodiment

[0050] Referring to Fig. 1, a facial motion capture method based on deep learning comprises the following steps:

[0051] S1: Use a depth camera to collect face video data and corresponding depth data to build a dataset;

[0052] In this embodiment, a RealSense L515 is used to collect the original video and depth maps, and construction of the dataset includes the following aspects:

[0053] S11: Construct a blendshape (mixed shape) model of the human face for each face video: reconstruct the 3D face model under a neutral expression from the depth map, and use a mesh deformation transfer algorithm to obtain the blendshape model, which contains the neutral expression and n expression bases, such as mouth open, smiling, frowning, and eyes closed.

[0054] Optionally, the blendshape model may be constructed as follows:

[0055] 1) Prepare a face template containing different expression bases;

[0056] 2) Restore the point cloud fr...
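The blendshape representation described in S11 can be sketched as a linear combination of a neutral mesh and per-base vertex offsets. The following is a minimal illustration with synthetic data; the vertex count, basis layout, and function name are assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def blendshape_mesh(neutral, bases, weights):
    """Evaluate a blendshape model.

    neutral: (V, 3) vertices of the neutral-expression mesh
    bases:   (n, V, 3) vertices of the n expression bases
             (e.g. mouth open, smile, frown, eyes closed)
    weights: (n,) coefficients, as predicted by the network
    """
    offsets = bases - neutral[None, :, :]           # per-base displacement
    return neutral + np.tensordot(weights, offsets, axes=1)

# Tiny synthetic example: 2 vertices, 2 expression bases.
neutral = np.zeros((2, 3))
bases = np.stack([np.full((2, 3), 1.0), np.full((2, 3), -1.0)])
mesh = blendshape_mesh(neutral, bases, np.array([0.5, 0.25]))
# Every vertex coordinate becomes 0.5 * 1.0 + 0.25 * (-1.0) = 0.25.
```

Deformation transfer, as cited in S11, is one standard way to produce the `bases` array from a template face that already has the expression set.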



Abstract

The present invention discloses a facial motion capture method and system based on deep learning, comprising the following steps: S1: use a depth camera to collect facial video data and corresponding depth data, and construct a dataset; S2: construct a facial action recognition network and train it on the dataset; S3: input an arbitrary video sequence into the trained facial action recognition network to predict blendshape coefficients; S4: apply the predicted blendshape coefficients to an arbitrary avatar to drive its facial movements. The system includes a video acquisition module, a network training module, a facial movement prediction module, and an avatar animation display module. The algorithm runs fast, and depth information is used only during training; in the prediction stage, video from a single ordinary camera suffices to complete motion capture, so no additional depth-acquisition equipment is needed and facial motion can be captured in real time.
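The inference half of the pipeline (steps S3 and S4) can be sketched as: frame in, coefficients out, avatar mesh deformed. This is a minimal illustration only; `predict_coefficients` is a hypothetical stand-in for the trained facial action recognition network, whose architecture is not specified in this excerpt:

```python
import numpy as np

def predict_coefficients(frame, n_bases):
    # Hypothetical stub for the trained network (S3): maps one RGB
    # frame to n blendshape coefficients in [0, 1]. A real system
    # would run a forward pass of the trained model here.
    rng = np.random.default_rng(0)
    return rng.uniform(0.0, 1.0, size=n_bases)

def drive_avatar(frame, neutral, bases):
    # S3 + S4: predict coefficients, then deform the avatar's mesh
    # as a weighted sum of expression-base offsets.
    w = predict_coefficients(frame, len(bases))
    return neutral + np.tensordot(w, bases - neutral[None], axes=1)

frame = np.zeros((480, 640, 3), dtype=np.uint8)    # one video frame
neutral = np.zeros((100, 3))                       # avatar neutral mesh
bases = np.random.default_rng(1).normal(size=(4, 100, 3))
mesh = drive_avatar(frame, neutral, bases)
```

Because the coefficients are a compact, avatar-independent representation, the same predicted weights can drive any avatar that ships with a matching set of expression bases, which is what S4 relies on.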

Description

technical field

[0001] The present invention relates to the technical fields of computer vision and computer graphics, and in particular to a deep learning-based facial motion capture method and system.

Background technique

[0002] Facial motion capture is a branch of motion capture technology: the process of using mechanical devices, cameras, and other equipment to record human facial expressions and movements and convert them into a series of parameter data. Compared with manually animated character expressions, characters driven by the captured facial movements of real people are more realistic, and the cost of manual modeling is greatly reduced. Motion capture has become an indispensable production tool in film and television animation, game development, and virtual reality.

[0003] The current mainstream methods can be divided into: based on two-dimensional data and based on three-di...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V40/16, G06V40/20, G06V20/40, G06V10/82, G06V10/774, G06K9/62, G06N3/04, G06T13/40
CPC: G06T13/40, G06N3/045, G06F18/214
Inventors: 刘逸颖, 李太豪, 阮玉平, 马诗洁, 郑书凯
Owner: ZHEJIANG LAB