A 3D face animation production method based on region segmentation and piecewise learning

A 3D face and region segmentation technology, applied in animation production, 3D image processing, image data processing, etc., which can solve the problem that animation is difficult to transfer.

Status: Inactive
Publication Date: 2008-07-09
Assignee: ZHEJIANG UNIV
Cites: 0 · Cited by: 39

AI Technical Summary

Problems solved by technology

This approach allows for realistic and vivid animation of a specific character...



Examples


Embodiment

[0043] First, facial motion data can be collected with the Hawk three-dimensional motion capture system from the US company MotionAnalysis, with the main feature points of the performer's face marked with reflective markers, as shown in Figure 1 (left); alternatively, previously captured 3D motion data can be read from a computer storage device. In addition, a 3D face mesh model is built with a 3DCamega 3D scanner, or an existing 3D face mesh model is read from a computer storage device, and the 3D face mesh model is rendered on the display using a 3D software package. The control points corresponding to the motion capture data are then calibrated on the 3D face mesh through a computer input device; that is, all or part of the marker points in the motion capture data are marked at their corresponding positions on the 3D face surface mesh, establishing the correspondence between the motion capture data and the 3D surface mesh, as shown in Figure 1 (right).
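The calibration described in [0043] amounts to building a correspondence between each motion-capture marker and a control vertex on the 3D face mesh. The embodiment performs this through a computer input device; purely as an illustration, the sketch below shows one simple automatic variant based on nearest-vertex search, assuming the markers and the neutral mesh already share a coordinate frame (the function name and array layout are hypothetical, not taken from the patent):

```python
import numpy as np

def calibrate_control_points(marker_positions, mesh_vertices):
    """Map each motion-capture marker to its nearest face-mesh vertex.

    marker_positions : (M, 3) marker coordinates in the neutral frame,
                       assumed already expressed in the mesh coordinate system.
    mesh_vertices    : (N, 3) positions of the 3D face mesh vertices.
    Returns an (M,) array of vertex indices to use as control points.
    """
    control_indices = np.empty(len(marker_positions), dtype=int)
    for i, p in enumerate(marker_positions):
        # squared Euclidean distance from marker i to every mesh vertex
        d2 = np.sum((mesh_vertices - p) ** 2, axis=1)
        control_indices[i] = int(np.argmin(d2))
    return control_indices

# Toy example: three markers snapped onto a four-vertex "mesh"
markers = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.1],
                     [0.0, 1.1, 0.0], [1.0, 1.0, 0.0]])
print(calibrate_control_points(markers, vertices))  # -> [0 1 2]
```

In practice the operator can still set or override individual correspondences through the input device, as the embodiment describes.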

[0044] After buil...



Abstract

The invention discloses a 3D facial animation method based on region segmentation and piecewise learning. The method comprises the following steps: first, sparse 3D motion capture data and a 3D face surface mesh are obtained; next, the control points corresponding to the motion capture data are calibrated on the 3D face mesh; the rigid and non-rigid motion in the motion capture data are separated, and the motion capture data are aligned with the face model in its static (neutral) state; the vertices of the face model are clustered based on the cosine distance between the displacement vectors of the motion capture data and on the manifold distance between the 3D face mesh vertices; a piecewise radial basis function is trained for each segmented surface patch; and the boundary motion between patches is fused via an improved Voronoi-Cell algorithm. The invention can perform automatic region segmentation on a single face model and model non-rigid facial motion with piecewise radial basis functions, producing realistic 3D facial animation.
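For each segmented patch, the piecewise learning stage can be pictured as fitting a radial basis function that maps the rest-pose control points of that patch to their captured displacements, and then evaluating it at every vertex of the patch. The sketch below is a minimal illustration under stated assumptions: a multiquadric kernel (the abstract does not name one) and a hypothetical (K, 3) / (V, 3) NumPy array layout. The Voronoi-Cell-based blending of motion across patch boundaries mentioned in the abstract is not shown.

```python
import numpy as np

def multiquadric(r, c=1.0):
    # Multiquadric radial basis; the specific kernel is an assumption for illustration.
    return np.sqrt(r * r + c * c)

def train_patch_rbf(ctrl_rest, ctrl_deformed, c=1.0):
    """Fit per-patch RBF weights mapping K rest-pose control points to their displacements."""
    D = np.linalg.norm(ctrl_rest[:, None, :] - ctrl_rest[None, :, :], axis=-1)  # (K, K)
    Phi = multiquadric(D, c)
    # one weight column per coordinate -> weights have shape (K, 3)
    return np.linalg.solve(Phi, ctrl_deformed - ctrl_rest)

def deform_patch(patch_vertices, ctrl_rest, weights, c=1.0):
    """Displace every vertex of one patch with the trained RBF."""
    D = np.linalg.norm(patch_vertices[:, None, :] - ctrl_rest[None, :, :], axis=-1)  # (V, K)
    Phi = multiquadric(D, c)
    return patch_vertices + Phi @ weights
```

In a full pipeline, one such weight matrix would be trained per patch and per captured frame, and the per-patch displacements would then be blended across patch boundaries.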

Description

Technical Field

[0001] The invention relates to a method for region segmentation of a three-dimensional face mesh and to a method for facial expression motion and deformation of a three-dimensional face mesh, and belongs to the technical field of computer three-dimensional animation.

Background Art

[0002] The earliest work on 3D facial animation technology dates back to 1972, and scholars have since done a great deal of work to generate realistic and vivid 3D facial animation. However, the complex anatomical structure of the human face makes its subtle non-rigid movements difficult to model mathematically, and people's sensitivity to the appearance of the human face makes the topic even harder. At present, some progress has been made in realistic 3D facial animation, and the existing work can basically be divided into the following categories:

[0003] Interpolation-based methods: the earliest facial animations were based on inter...

Claims


Application Information

IPC(8): G06T15/00, G06T15/70, G06T13/40
Inventors: 庄越挺 (Zhuang Yueting), 王玉顺 (Wang Yushun), 肖俊 (Xiao Jun), 吴飞 (Wu Fei)
Owner: ZHEJIANG UNIV