
Method for reconstructing a three-dimensional facial expression model based on a monocular video

A 3D face and facial-expression modeling technology, applied to 3D modeling and character/pattern recognition (including processing-step details), addressing the problems that the accuracy of the reconstructed coarse-scale face shape is difficult to guarantee and that good recovery results cannot be obtained.

Active Publication Date: 2019-04-05
BEIHANG UNIV


Problems solved by technology

[0007] Current techniques for reconstructing 3D facial expression models from monocular video rely on the 2D and 3D positions of facial feature points in the video and on a recovered camera matrix. The most common approach projects the recovered 3D model onto the 2D plane and optimizes the similarity between the projected brightness values and the brightness values of the input image. Such methods cannot obtain good recovery results under large facial expression changes, illumination changes, or slight occlusion. Moreover, the feature points are too sparse to guarantee the accuracy of the reconstructed coarse-scale face shape, especially in facial regions far from the feature points.




Embodiment Construction

[0047] The present invention will be described in detail below in conjunction with the accompanying drawings and embodiments.

[0048] As shown in Figure 1, the specific steps of the present invention are as follows:

[0049] (1) Multi-frame 2D dense optical flow calculation based on subspace constraints

[0050] The two-dimensional optical-flow field can be regarded as the set of motion vectors of every image pixel on the two-dimensional plane. For any pixel $j$, its 2D motion vector relative to its position $(x_{1j}, y_{1j})$ in the reference image is defined as:

[0051] $w_j = [u_{1j}\; u_{2j}\; \dots\; u_{Fj} \mid v_{1j}\; v_{2j}\; \dots\; v_{Fj}]^T$

[0052] Suppose the image has $P$ pixels and the input video has $F$ frames, where $u_{ij} = x_{ij} - x_{1j}$ and $v_{ij} = y_{ij} - y_{1j}$; here $(x_{ij}, y_{ij})$ are the coordinates of point $j$ in the $i$-th frame and $(x_{1j}, y_{1j})$ are its coordinates in the first frame, i.e. the reference image, with $1 \le i \le F$, $1 \le j \le P$.
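The construction of the motion-vector matrix in [0050]–[0052] can be sketched in code. This is an illustrative NumPy sketch, not the patent's implementation; the tracked coordinate array `coords` and the function name are assumptions for illustration:

```python
import numpy as np

def motion_vector_matrix(coords):
    """Stack the per-pixel 2D motion vectors w_j into a 2F x P matrix.

    coords[i, j] = (x_ij, y_ij): position of pixel j in frame i,
    with frame 0 as the reference image. Displacements are
    u_ij = x_ij - x_1j and v_ij = y_ij - y_1j, as in [0052].
    """
    F, P, _ = coords.shape
    u = coords[:, :, 0] - coords[0, :, 0]   # F x P horizontal displacements
    v = coords[:, :, 1] - coords[0, :, 1]   # F x P vertical displacements
    # Column j of the result is w_j = [u_1j ... u_Fj | v_1j ... v_Fj]^T
    return np.vstack([u, v])
```

By construction the rows corresponding to the reference frame are zero, since every pixel's displacement relative to itself vanishes.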

[0053] Based on the tr...
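Paragraph [0053] is truncated here, but the heading of step (1) names subspace constraints on the multi-frame optical flow. One common way to impose such a constraint is to restrict the $2F \times P$ trajectory matrix $W$ to a low-dimensional linear subspace via a truncated SVD; the sketch below illustrates that general idea under the assumption of a rank-$K$ subspace, and is not the patent's exact formulation:

```python
import numpy as np

def project_to_subspace(W, K):
    """Enforce a rank-K subspace constraint on the 2F x P trajectory
    matrix W: the motion vectors of all P pixels are assumed to lie in
    a K-dimensional linear subspace, so W is approximated as B @ C
    with a 2F x K basis B and K x P per-pixel coefficients C."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    B = U[:, :K] * s[:K]        # subspace basis scaled by singular values
    C = Vt[:K, :]               # per-pixel coefficients in that basis
    return B @ C
```

If the true motion already lies in a rank-$K$ subspace, the projection returns it unchanged; otherwise it yields the closest rank-$K$ approximation in the least-squares sense.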



Abstract

The invention discloses a method for reconstructing a three-dimensional facial expression model from a monocular video. No extra multi-angle capture is needed: a generic 3D face model is deformed directly from a neutral-expression frame of the input monocular video to generate a personalized three-dimensional face template. The facial-expression deformation corresponding to each frame is then expressed as a 3D vertex flow of the personalized template in three-dimensional space, and the coarse-scale geometric model of the facial expression is solved by enforcing consistency with the 2D optical flow. Using dense optical flow both improves the shape accuracy of the coarse-scale reconstruction and relaxes the capture requirements on the input video; details are then added to the recovered coarse-scale face model by a shape-from-shading technique to recover fine-scale facial geometry, reconstructing a high-fidelity three-dimensional face geometric model.

Description

Technical field

[0001] The invention relates to a method for reconstructing a three-dimensional facial expression model from monocular video, and belongs to the technical field of computer virtual reality.

Background art

[0002] Realistically reconstructed 3D facial expression models are widely used in computer games, film and television production, social, medical, and other fields. Traditional 3D face model acquisition and reconstruction mostly rely on heavy and expensive hardware and on controllable lighting in a laboratory environment. As virtual reality technology and mobile smart terminals rapidly enter public life, more and more people hope to obtain high-quality 3D facial expression models in daily life with low-cost equipment and to apply them in virtual environments. Shooting video with mobile phones and cameras, or directly using Internet videos, to reconstruct 3D facial expression models minimizes the complexity of obtaining eq...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06T7/215; G06K9/00
CPC: G06T7/215; G06T17/00; G06T2207/10016; G06T2200/08; G06T2207/30201; G06V40/168; G06V40/174; Y02T10/40
Inventor: 王珊 (Wang Shan), 沈旭昆 (Shen Xukun)
Owner: BEIHANG UNIV