Method for constructing a three-dimensional dynamic human facial expression model from video flow
A three-dimensional dynamic facial-expression technology at the intersection of computer vision and computer graphics. It addresses the problems that corner detection and matching are insufficiently robust, that existing constraints are too strict, and that local facial features are difficult to reflect accurately.
Example Embodiment
[0090] Example 1
[0091] An example of modeling an angry expression:
[0092] Step 1: The input video has 100 frames, and 40 predefined feature points are marked on the first frame of the uncalibrated monocular video, and the feature points are shown in Figure 2;
[0093] Step 2: Track the feature points robustly with the affine-corrected optical flow method, using the eight feature points at the mouth corners, the inner and outer eye corners, and the sideburns on both sides to compute the affine transformation between the two frames; use this affine transformation to refine the optical flow tracking results of the remaining 32 feature points;
[0094] Step 3: Recover the three-dimensional coordinates of the feature points with the factorization-based algorithm, and obtain the personalized face model and expression effect by deforming a generic face model;
[0095] Step 4: Use the average value of the coordinates of the three-dimensional feature points in the first three frames as the three-dimensional...
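The affine-corrected tracking in Step 2 can be sketched as follows. This is a minimal numpy sketch, not the patent's implementation: the function names, the least-squares affine fit, and the `blend` weight that pulls raw flow estimates toward the affine prediction are all assumptions, since the text only says the affine transformation "optimizes" the remaining 32 points.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched anchor points (N >= 3).
    Returns a (2, 3) matrix A such that dst ~= [src | 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates, (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

def affine_correct(anchors_prev, anchors_next, pts_prev, pts_flow, blend=0.5):
    """Refine raw optical-flow positions with the anchor-derived affine motion.

    anchors_*: the 8 stable points (mouth corners, eye corners, sideburns)
               in the previous and current frame.
    pts_prev:  previous positions of the remaining 32 feature points.
    pts_flow:  their raw optical-flow estimates in the current frame.
    blend:     weight toward the affine prediction (hypothetical parameter,
               not specified in the source).
    """
    A = estimate_affine(anchors_prev, anchors_next)
    ones = np.ones((pts_prev.shape[0], 1))
    pts_affine = np.hstack([pts_prev, ones]) @ A.T   # where rigid motion would put them
    return blend * pts_affine + (1.0 - blend) * pts_flow
```

With `blend=1.0` the tracked points follow the rigid affine motion exactly; intermediate weights let non-rigid expression motion from the raw flow survive while suppressing tracking drift.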
Example Embodiment
[0100] Example 2
[0101] Modeling example of a surprised expression:
[0102] Step 1: The input video has 80 frames, and 40 predefined feature points are marked on the first frame of the uncalibrated monocular video;
[0103] Step 2: Track the feature points robustly with the affine-corrected optical flow method, using the eight feature points at the mouth corners, the inner and outer eye corners, and the sideburns on both sides to compute the affine transformation between the two frames; use this affine transformation to refine the optical flow tracking results of the remaining 32 feature points;
[0104] Step 3: Recover the three-dimensional coordinates of the feature points with the factorization-based algorithm, and obtain the personalized face model and expression effect by deforming a generic face model;
[0105] Step 4: Use the average value of the coordinates of the three-dimensional feature points in the first three frames as the three-dimensional...
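Step 3's factorization-based recovery follows the general shape of Tomasi-Kanade-style structure from motion: stack the tracked 2-D coordinates into a measurement matrix, remove the per-frame centroid, and take a rank-3 SVD. This is a sketch under that assumption; the patent does not spell out its exact factorization, and the metric-upgrade step that resolves the remaining affine ambiguity is omitted here.

```python
import numpy as np

def factorize_structure(W):
    """Rank-3 factorization of a measurement matrix (Tomasi-Kanade style sketch).

    W: (2F, P) matrix of stacked x/y image coordinates of P feature points
       over F frames (rows 2f and 2f+1 hold frame f's x and y rows).
    Returns (M, S): motion (2F, 3) and shape (3, P), up to an affine ambiguity.
    """
    W_c = W - W.mean(axis=1, keepdims=True)        # subtract per-row centroid (translation)
    U, s, Vt = np.linalg.svd(W_c, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                  # camera/motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]           # 3-D shape factor
    return M, S
```

Because noise-free affine projections of a rigid point set give a centered measurement matrix of rank at most 3, the truncated SVD reproduces it exactly; with real tracks the truncation acts as a least-squares fit.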
Example Embodiment
[0110] Example 3
[0111] Modeling example of a fearful expression:
[0112] Step 1: The input video has 100 frames, and 40 predefined feature points are marked on the first frame of the uncalibrated monocular video;
[0113] Step 2: Track the feature points robustly with the affine-corrected optical flow method, using the eight feature points at the mouth corners, the inner and outer eye corners, and the sideburns on both sides to compute the affine transformation between the two frames; use this affine transformation to refine the optical flow tracking results of the remaining 32 feature points;
[0114] Step 3: Recover the three-dimensional coordinates of the feature points with the factorization-based algorithm, and obtain the personalized face model and expression effect by deforming a generic face model;
[0115] Step 4: Use the average value of the coordinates of the three-dimensional feature points in the first three frames as the three-dimensional...
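Step 4's baseline computation, averaging the recovered 3-D feature points over the first three frames to obtain a neutral reference, can be sketched directly. The array shapes and the displacement helper are illustrative assumptions; the source text is truncated before it states what the reference is used for.

```python
import numpy as np

def neutral_reference(coords, n_frames=3):
    """Average the 3-D feature-point coordinates of the first n_frames frames.

    coords: (F, N, 3) array of per-frame 3-D coordinates of N feature points
            (N = 40 in the examples above).
    Returns the (N, 3) neutral reference used as the expression baseline.
    """
    return coords[:n_frames].mean(axis=0)

def expression_offsets(coords, ref):
    """Per-frame displacement of each feature point from the neutral reference
    (a hypothetical helper, one plausible use of the baseline)."""
    return coords - ref[None]
```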