Video-stream-based method for constructing a three-dimensional dynamic human facial expression model

A facial-expression and three-dimensional dynamic modeling technology at the intersection of computer vision and computer graphics. It addresses the problems that corner detection and matching are insufficiently robust, that the constraints of existing methods are too strict, and that interpolating feature points alone makes it difficult to reflect local facial features accurately.

Publication date: 2007-02-28 (status: inactive)
ZHEJIANG UNIV

Problems solved by technology

Literature [5] uses normalized orthogonal images to model faces and drives expressions with muscle vectors. Its disadvantages are that the positions of the muscle vectors are difficult to set correctly and that the orthogonality constraint is too strict, which limits the method's generality.
Literature [6] models the face from two frontal images. The camera must be calibrated in advance, relatively few feature points are reconstructed, and interpolating only the feature points to generate a face mesh makes it difficult to reflect local facial features accurately.
Literature [7] also uses orthogonal images, optimizing the face model through a step-by-step refinement process, and suffers from the same overly strict constraints.
Li Zhang et al. [8] use structured light and stereo vision to reconstruct facial expressions from video streams; this requires hardware including a structured-light projector, the scanned model needs cumbersome manual preprocessing, and the demands on ambient lighting are high.
The method proposed by Zicheng Liu et al. [


Examples


Example Embodiment

[0090] Example 1

[0091] An example of modeling an angry expression:

[0092] Step 1: The input video contains 100 frames. Mark the 40 predefined feature points on the first frame of the uncalibrated monocular video; the feature points are shown in Figure 2;
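
A minimal sketch of how the manual marking in this step might be done with OpenCV mouse callbacks; the window name and the early-abort key are illustrative choices, not part of the patent:

```python
import cv2

def mark_feature_points(first_frame, n_points=40):
    """Collect n_points mouse clicks on the first video frame."""
    points = []

    def on_click(event, x, y, flags, param):
        # Record a point on each left click until n_points are marked.
        if event == cv2.EVENT_LBUTTONDOWN and len(points) < n_points:
            points.append((x, y))
            cv2.circle(first_frame, (x, y), 2, (0, 255, 0), -1)

    cv2.namedWindow("mark points")
    cv2.setMouseCallback("mark points", on_click)
    while len(points) < n_points:
        cv2.imshow("mark points", first_frame)
        if cv2.waitKey(20) == 27:  # Esc aborts early
            break
    cv2.destroyWindow("mark points")
    return points
```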

[0093] Step 2: Track the feature points robustly with an affine-corrected optical flow method: use the eight feature points at the corners of the mouth, the inner and outer corners of the eyes, and the sideburns on both sides to compute the affine transformation between consecutive frames, and use this transformation to refine the optical-flow tracking results of the remaining 32 feature points;
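
A minimal sketch of this tracking step, assuming OpenCV's pyramidal Lucas-Kanade tracker. The anchor indices and the blending weight `alpha` are illustrative assumptions; the patent does not specify how the affine prediction is combined with the raw flow:

```python
import cv2
import numpy as np

# Indices of the eight stable anchor points (mouth corners, inner and
# outer eye corners, sideburns) within the 40-point set. Illustrative
# values; the real indices depend on the marking scheme.
ANCHOR_IDX = np.arange(8)

def track_with_affine_correction(prev_gray, cur_gray, prev_pts, alpha=0.5):
    """Track 40 points with pyramidal Lucas-Kanade, then refine the
    non-anchor points via the inter-frame affine transform estimated
    from the anchors."""
    p0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, p0, None, winSize=(21, 21), maxLevel=3)
    cur_pts = p1.reshape(-1, 2)

    # Affine transform between the two frames from the eight anchors.
    A, inliers = cv2.estimateAffine2D(
        prev_pts[ANCHOR_IDX].astype(np.float32),
        cur_pts[ANCHOR_IDX].astype(np.float32))

    # Predict the remaining 32 points and blend with the raw flow.
    rest = np.setdiff1d(np.arange(len(prev_pts)), ANCHOR_IDX)
    pred = prev_pts[rest] @ A[:, :2].T + A[:, 2]
    cur_pts[rest] = alpha * cur_pts[rest] + (1 - alpha) * pred
    return cur_pts
```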

[0094] Step 3: Recover the three-dimensional coordinates of the feature points with a factorization-based algorithm, and obtain the personalized face model and the expression effect by deforming a generic face model;
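
A minimal sketch of factorization-based reconstruction in the style of Tomasi-Kanade under orthographic projection; the patent's factorization may differ in detail, and the metric upgrade that enforces orthonormal camera rows is omitted here:

```python
import numpy as np

def factorize_tracks(tracks):
    """Recover 3D structure from 2D tracks via rank-3 factorization.

    tracks: array (F, P, 2) of tracked image coordinates over F frames.
    Returns motion M (2F x 3) and shape S (3 x P), both up to a 3x3
    affine ambiguity.
    """
    # Build the 2F x P measurement matrix (x rows stacked over y rows)
    # and remove per-frame translation by centering each row.
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)
    W = W - W.mean(axis=1, keepdims=True)

    # Under orthographic projection W has rank <= 3: W ~ M @ S.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```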

[0095] Step 4: Use the average value of the coordinates of the three-dimensional feature poin...

Example Embodiment

[0100] Example 2

[0101] An example of modeling a surprised expression:

[0102] Step 1: The input video contains 80 frames. Mark the 40 predefined feature points on the first frame of the uncalibrated monocular video;

[0103] Step 2: Track the feature points robustly with the affine-corrected optical flow method: use the eight feature points at the corners of the mouth, the inner and outer corners of the eyes, and the sideburns on both sides to compute the affine transformation between consecutive frames, and use this transformation to refine the optical-flow tracking results of the remaining 32 feature points;

[0104] Step 3: Recover the three-dimensional coordinates of the feature points with a factorization-based algorithm, and obtain the personalized face model and the expression effect by deforming a generic face model;

[0105] Step 4: Use the average value of the coordinates of the three-dimensional feature points in the first three frames as th...

Example Embodiment

[0110] Example 3

[0111] An example of modeling a fear expression:

[0112] Step 1: The input video contains 100 frames. Mark the 40 predefined feature points on the first frame of the uncalibrated monocular video;

[0113] Step 2: Track the feature points robustly with the affine-corrected optical flow method: use the eight feature points at the corners of the mouth, the inner and outer corners of the eyes, and the sideburns on both sides to compute the affine transformation between consecutive frames, and use this transformation to refine the optical-flow tracking results of the remaining 32 feature points;

[0114] Step 3: Recover the three-dimensional coordinates of the feature points with a factorization-based algorithm, and obtain the personalized face model and the expression effect by deforming a generic face model;

[0115] Step 4: Use the average value of the coordinates of the three-dimensional feature points in the first three frames as the three-dimensional...


Abstract

The invention relates to a video-stream-based method for constructing a three-dimensional dynamic facial expression model, which recovers three-dimensional facial expressions from an input video stream. The algorithm comprises: (1) marking facial feature points on the first frame of the input video; (2) tracking the feature points with an affine-corrected optical flow method; (3) reconstructing the two-dimensional tracking data into three-dimensional data by factorization; (4) fitting the reconstructed three-dimensional data to a generic face model to generate a personalized face and dynamic expression motion; (5) compressing the original video with the eigenface technique; (6) reconstructing the input video from the eigenfaces and projecting it as a dynamic texture to synthesize a realistic virtual appearance. The invention offers high temporal and spatial efficiency and high practical value.
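
Reading the abstract's eigenface compression (steps 5 and 6) as standard PCA over the video frames, a minimal sketch follows; the rank k and the grayscale-frame assumption are illustrative, not specified by the patent:

```python
import numpy as np

def eigenface_compress(frames, k=8):
    """Compress grayscale frames (F, H, W) onto the top-k eigenfaces."""
    F, H, W = frames.shape
    X = frames.reshape(F, -1).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; the rows of Vt are the eigenfaces.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]              # k eigenfaces of length H*W
    coeffs = Xc @ basis.T       # per-frame coefficients, shape (F, k)
    return mean, basis, coeffs

def eigenface_reconstruct(mean, basis, coeffs, frame_shape):
    """Rebuild approximate frames for dynamic texture projection."""
    X = coeffs @ basis + mean
    return X.reshape((-1,) + frame_shape)
```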

Description

Technical field

[0001] The invention relates to the intersecting fields of computer vision and computer graphics, and in particular to a three-dimensional dynamic facial expression modeling method based on video streams.

Background technique

[0002] Personalized face modeling and realistic expression animation generation have long been a challenging topic, with wide applications in virtual reality, film production, game entertainment, and other areas. Since the pioneering work of Parke [1] in 1972, research on face and expression modeling has made great progress. According to the input data required, the modeling methods fall mainly into the following categories: modeling based on captured 3D sample data, modeling based on images, and modeling based on video streams. Blanz et al. [2] build a personalized face model from an input face image by learning the statistical features in a 3D face database, which requires expensive laser scanning equipment to pre...


Application Information

IPC(8): G06T17/00, G06T15/00, G06T13/40
Inventors: ZHUANG Yueting, ZHANG Jian, XIAO Jun, WANG Yushun
Owner: ZHEJIANG UNIV