Face three-dimensional reconstruction method based on end-to-end convolutional neural network

A convolutional neural network and three-dimensional reconstruction technology, applied to neural learning methods, biological neural network models, neural architectures, etc.

Status: Inactive
Publication Date: 2018-11-13
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

[0005] The traditional film and television animation industry and the game and entertainment field require a large number of character models, a large proportion of which must be modeled manually by professional 3D modelers. This process is very time-consuming and labor-intensive.



Examples


Embodiment 1

[0137] In Embodiment 1, the inventor used this method to reconstruct a three-dimensional face model from a profile picture of a man, as shown in Figure 2. The first image is the original face picture; the second image shows the reconstruction result, with the recovered 3D face model rendered over the original picture; the third image shows the 68 key points and their contours obtained by alignment, covering the eyes, nose, mouth, and lower facial contour. The face in the picture is rotated at a large angle, the scene is poorly illuminated, and a large area of the face is in shadow, which makes reconstruction very difficult. The reconstruction results show that this method still obtains good results under large-angle rotation and complex lighting, and the high-precision face alignment results further corroborate the accuracy of the reconstruction.
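The patent text does not give code for the alignment step; the following is a minimal sketch, in Python, of how 68 key points could be read out of a predicted two-dimensional coordinate map, assuming (as an illustration only) a 256x256x3 position map and a hypothetical `uv_kpt_indices` table recording where each of the 68 landmarks lies in that map.

```python
# Sketch only: recovering 68 face key points from a predicted coordinate map.
# The map resolution and the landmark index table are illustrative assumptions.
import numpy as np

def extract_landmarks(position_map: np.ndarray, uv_kpt_indices: np.ndarray) -> np.ndarray:
    """position_map: (H, W, 3) map of 3D coordinates; uv_kpt_indices: (68, 2) row/col indices."""
    rows, cols = uv_kpt_indices[:, 0], uv_kpt_indices[:, 1]
    return position_map[rows, cols, :]          # (68, 3): x, y, z for each key point

# Example with placeholder data
pos_map = np.random.rand(256, 256, 3).astype(np.float32)
kpt_idx = np.random.randint(0, 256, size=(68, 2))
landmarks_3d = extract_landmarks(pos_map, kpt_idx)   # eyes, nose, mouth, lower contour
print(landmarks_3d.shape)                            # (68, 3)
```

Because the landmarks are simply fixed pixels of the coordinate map, alignment comes essentially for free once the map has been regressed, which is consistent with the joint reconstruction-and-alignment behaviour described above.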

Embodiment 2

[0138] In Embodiment 2, the inventor used this method to reconstruct a picture of a woman with a large occlusion; the image layout has the same meaning as in Embodiment 1. Part of the face in the picture is covered by hair, and the occluded region is relatively complex. Even under this challenging occlusion, the present invention still obtains accurate reconstruction results: the main regions of the face, including the ears, nose, and mouth, are accurately positioned with small reconstruction error, which shows that the loss function design based on the weight mask is effective.
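The embodiment attributes the robustness to occlusion to a weight-mask-based loss. The exact weights are not reproduced in this extract; the snippet below is a hedged sketch of one plausible form, a per-pixel weighted mean squared error over the coordinate map in which key-point and central-face pixels receive larger weights than peripheral or background pixels. The PyTorch framing and the specific weight values are assumptions, not the patent's exact design.

```python
# Sketch only: per-pixel weighted MSE over the predicted coordinate map.
import torch

def weight_mask_loss(pred_map: torch.Tensor,
                     gt_map: torch.Tensor,
                     weight_mask: torch.Tensor) -> torch.Tensor:
    """pred_map, gt_map: (B, 3, H, W) coordinate maps; weight_mask: (1, 1, H, W)."""
    sq_err = (pred_map - gt_map) ** 2        # per-pixel squared error
    return (sq_err * weight_mask).mean()     # emphasize important face regions

# Illustrative mask (assumed values): background 0, face 1x, inner face 4x, key points 16x
weight_mask = torch.ones(1, 1, 256, 256)     # fill region weights as appropriate
```

Down-weighting or zeroing pixels outside the face also means that hair-covered or otherwise occluded regions contribute little to the loss, matching the behaviour observed in this embodiment.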

Embodiment 3

[0139] In Embodiment 3, the inventor tested the algorithm flow of the present invention on a total of 3,300 face pictures from two public face data sets. The experimental results show that the algorithm of the present invention can still accurately and efficiently reconstruct an aligned 3D face model under difficult conditions such as large-angle rotation, complex lighting, occlusion, blurring, and makeup. Traditional methods are often not robust to these challenges, whereas this method is based on a convolutional neural network and the model has strong expressive ability.
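The extract does not specify the error metric used in these tests; a common choice for this kind of evaluation, shown below purely as an assumption, is the mean per-vertex Euclidean error normalized by the size of the ground-truth face bounding box.

```python
# Sketch only: a plausible reconstruction-error metric (not taken from the patent).
import numpy as np

def normalized_mean_error(pred_vertices: np.ndarray, gt_vertices: np.ndarray) -> float:
    """pred_vertices, gt_vertices: (N, 3) corresponding vertex positions."""
    per_vertex = np.linalg.norm(pred_vertices - gt_vertices, axis=1)   # Euclidean errors
    bbox_diag = np.linalg.norm(gt_vertices.max(axis=0) - gt_vertices.min(axis=0))
    return float(per_vertex.mean() / bbox_diag)                        # scale-invariant score
```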



Abstract

The invention discloses a face three-dimensional reconstruction method based on a single face image. The method is built on an end-to-end convolutional neural network. To make full use of the capability of a deep convolutional neural network, the three-dimensional face model is encoded into a two-dimensional coordinate map, a new loss function better suited to face reconstruction is proposed, and learning and prediction can be carried out directly with a lightweight end-to-end network. The algorithm can also perform face alignment and obtain accurate three-dimensional face key point coordinates. Comparative experiments on several public face data sets show that the method can reconstruct an accurate three-dimensional face model from a single face image. Compared with existing face reconstruction methods, the proposed method achieves substantial improvements in both precision and speed.
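The abstract's central idea is to encode the three-dimensional face model as a two-dimensional coordinate map so that an ordinary image-to-image network can regress it. As a rough illustration (not the patent's exact procedure), each mesh vertex's (x, y, z) position can be written into a map at that vertex's UV texture coordinate; a full implementation would rasterize triangles so every pixel is filled, and the 256x256 resolution is an assumption.

```python
# Sketch only: encoding a 3D face mesh as a 2D coordinate ("position") map.
import numpy as np

def encode_position_map(vertices: np.ndarray, uv_coords: np.ndarray,
                        size: int = 256) -> np.ndarray:
    """vertices: (N, 3) model-space positions; uv_coords: (N, 2) in [0, 1]."""
    pos_map = np.zeros((size, size, 3), dtype=np.float32)
    cols = np.clip((uv_coords[:, 0] * (size - 1)).round().astype(int), 0, size - 1)
    rows = np.clip(((1.0 - uv_coords[:, 1]) * (size - 1)).round().astype(int), 0, size - 1)
    pos_map[rows, cols] = vertices        # nearest-pixel splat; real use rasterizes faces
    return pos_map
```

Once the model is in this image-like form, a lightweight encoder-decoder network can predict it directly from the input photograph, which is what makes the end-to-end formulation possible.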

Description

Technical Field

[0001] The present invention relates to the field of three-dimensional reconstruction, and in particular to a three-dimensional face reconstruction method based on a single picture.

Background Technique

[0002] The human face is the most distinguishable part of a person, and every face is different. We can quickly judge a person's identity, region, and race from facial appearance characteristics such as the height of the bridge of the nose, the depth of the eye sockets, the size of the eyes, and the thickness of the lips. These appearance features are also one of the prerequisites for face recognition to function. In daily life, besides language, people use a large number of facial expressions to communicate and convey information; a person's emotions are reflected in facial expressions, and the face is one of the most important windows for understanding a person. Face-related applications are one of the most important areas i...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06T17/00; G06N3/045
Inventors: 任重; 俞云康; 周昆
Owner: ZHEJIANG UNIV