Facial feature point tracking and facial animation method based on a single video camera

A facial-feature tracking and camera technology, applied in animation production, image analysis, and image data processing. It addresses the lack of a general, accurate, and robust facial animation method, and achieves a good user experience, robust processing, and good handling of fast motion.

Active Publication Date: 2014-07-23
ZHEJIANG UNIV

Problems solved by technology

Facial expression capture has been extensively studied in the past, but none of the existing approaches provides a general, accurate, and robust method for facial animation.



Examples


Embodiment

[0123] The inventors implemented an embodiment of the present invention on a desktop computer with an Intel Core i5 (3.0 GHz) central processing unit and a web camera providing 640×480 resolution at 30 frames per second. In the one-time preprocessing, training the universal regressor takes about 6 hours. At run time, each frame takes about 12 milliseconds for the shape-vector regression, 3 milliseconds for the shape-vector post-processing, and 5 milliseconds for the global-parameter update, so that the system reaches an average speed of 28 frames per second.
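A back-of-the-envelope check of the timings reported above (assuming the three per-frame stages are additive) shows the compute budget comfortably exceeds the observed frame rate; the 30 fps camera and miscellaneous overhead plausibly account for the ~28 fps average:

```python
# Per-frame stage timings reported in the embodiment (milliseconds).
regress_ms = 12      # shape-vector regression
postprocess_ms = 3   # shape-vector post-processing
globals_ms = 5       # global-parameter update

per_frame_ms = regress_ms + postprocess_ms + globals_ms
compute_fps = 1000 / per_frame_ms

print(per_frame_ms)  # -> 20 (ms of pure compute per frame)
print(compute_fps)   # -> 50.0 (compute-bound throughput, above the 30 fps camera)
```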

[0124] The inventors invited various new users to test the system based on the present invention. The results show that, for any new user and without any preprocessing, the system obtains accurate two-dimensional feature point positions in the image while also accurately recovering the three-dimensional dynamic expression parameters ...



Abstract

The invention discloses a facial feature point tracking and facial animation method based on a single video camera. The method includes the following steps: building a regressor training set from a published facial image database and using it to train a DDE model regressor; using the regressor to perform regression on input images to obtain the corresponding shape vectors, and computing the two-dimensional positions of the facial feature points in the images from the shape vectors; post-processing the shape vectors so that the expression coefficients they contain satisfy certain constraints; updating the global parameters by combining the two-dimensional feature point positions with the post-processed shape vectors; and mapping the resulting three-dimensional dynamic expression parameters onto a virtual avatar, driving animated characters to perform facial animation. The method is aimed at general users: no preprocessing is required for a specific user, and new users can use the system directly. The method handles fast motion and drastic translation and rotation well, and copes well with severe illumination changes.
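The per-frame steps of the abstract can be sketched as a small loop. This is a minimal illustrative sketch, not the patent's actual implementation: all names (`track_frame`, `postprocess_expression`, `update_globals`) and the stub regressor are assumptions, and the post-processing shown is a common constraint (clamping blendshape expression coefficients to [0, 1]) standing in for the patent's unspecified constraints:

```python
def postprocess_expression(coeffs):
    # Constrain expression coefficients to [0, 1], a typical requirement
    # for linear blendshape models (stands in for the patent's constraints).
    return [min(max(c, 0.0), 1.0) for c in coeffs]

def update_globals(landmarks, expr, global_params):
    # Placeholder for the global-parameter update step; a real system
    # would refit rigid pose / identity from the tracked landmarks.
    return dict(global_params, last_landmarks=landmarks)

def track_frame(image, regressor, global_params):
    shape = regressor(image)                      # regress the shape vector
    landmarks = shape["landmarks"]                # 2D feature point positions
    expr = postprocess_expression(shape["expr"])  # enforce constraints
    global_params = update_globals(landmarks, expr, global_params)
    return landmarks, expr, global_params

# Toy usage with a stub regressor whose coefficients violate the constraints:
stub = lambda img: {"landmarks": [(10.0, 20.0)], "expr": [-0.2, 0.5, 1.3]}
landmarks, expr, params = track_frame(None, stub, {})
print(expr)  # -> [0.0, 0.5, 1.0]
```

The resulting expression parameters would then be mapped onto the virtual avatar each frame to drive the animation.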

Description

Technical Field

[0001] The invention relates to facial feature point tracking in images, facial motion capture, and real-time animation technology, in particular to a facial feature point tracking and facial animation method based on a single video camera.

Background

[0002] The relevant research background of the present invention is briefly described as follows:

[0003] Facial motion capture and facial landmark tracking have been widely studied in computer graphics and computer vision. In these studies, many methods have been used to capture expressions from one subject and transfer them to another target model. In commercial applications (such as movies and games), special devices are used, such as facial markers (Huang, H., Chai, J., Tong, X., and Wu, H.-T. 2011. Leveraging motion capture and 3D scanning for high-fidelity facial performance acquisition. ACM Trans. Graph. 30, 4, 74:1-74:10.) and camera arrays (Bradley, D., Heidrich, W., Popa, T., and Sheffer, A. 20...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T 13/40, G06T 7/00
Inventors: Kun Zhou (周昆), Chen Cao (曹晨), Qiming Hou (侯启明)
Owner: ZHEJIANG UNIV