End-to-end three-dimensional human face reconstruction method based on deep neural network

A deep neural network and 3D face technology, applied to biological neural network models, neural architectures, 3D modeling, etc., which addresses the problem that changes in pose, expression, and illumination in face images affect recognition and reconstruction

Publication Date: 2017-10-24 (Inactive)
SHENZHEN WEITESHI TECH

AI Technical Summary

Problems solved by technology

[0004] Aiming at the problem that changes in pose, expression, and illumination in facial images affect recognition and reconstruction, the purpose of the present invention is to provide an end-to-end 3D face reconstruction method based on a deep neural network. The method uses a 3D facial shape subspace model and represents the 3D face as a linear combination of a shape basis and a blend-shape basis; a sub-convolutional neural network (fused CNN) is added to the VGG-based face model to regress expression parameters; and a multi-task learning loss function is used for identity parameter prediction and expression parameter prediction. In end-to-end training, the input of the deep neural network is a two-dimensional image, and the output consists of an identity parameter vector and an expression parameter vector.
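As a rough illustration of this kind of architecture, the following is a minimal PyTorch sketch: a VGG-style backbone whose shared features feed a fully connected identity head and a small sub-CNN branch that regresses expression parameters. The layer sizes, the fusion point, and the parameter dimensions (199 identity, 29 expression) are assumptions made for the example, not details taken from the patent.

```python
# Sketch only: a VGG-style backbone with an identity head and a sub-CNN
# ("fused CNN") branch for expression parameters. Sizes are illustrative.
import torch
import torch.nn as nn

class FaceParamNet(nn.Module):
    def __init__(self, n_id=199, n_exp=29):
        super().__init__()
        # VGG-style convolutional backbone (heavily truncated for brevity)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(7),
        )
        # Identity head: fully connected regression of the identity parameter vector
        self.id_head = nn.Sequential(
            nn.Flatten(), nn.Linear(256 * 7 * 7, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, n_id),
        )
        # Sub-CNN branch regressing the expression parameter vector
        self.exp_branch = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, n_exp),
        )

    def forward(self, img):
        feat = self.backbone(img)        # shared VGG-style features
        alpha_d = self.id_head(feat)     # identity parameters
        alpha_e = self.exp_branch(feat)  # expression parameters
        return alpha_d, alpha_e

# End-to-end interface: a 2D image in, two parameter vectors out
net = FaceParamNet()
alpha_d, alpha_e = net(torch.randn(1, 3, 224, 224))
```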




Embodiment Construction

[0028] It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other. The present invention will be further described in detail below in conjunction with the drawings and specific embodiments.

[0029] Figure 1 is a system framework diagram of the end-to-end three-dimensional face reconstruction method based on a deep neural network of the present invention. It mainly includes the 3D facial shape subspace model, the deep neural network (DNN) architecture, and end-to-end training.

[0030] 3D face shape subspace model: the method adopts a 3D face shape subspace model and treats the 3D face as a linear combination of a shape basis and a blend shape basis:

[0031] S = S̄ + U_d α_d + U_e α_e

[0032] where S is the target 3D face, S̄ is the mean face shape, U_d is the principal component basis trained on 3D face scans, α_d is the identity parameter vector, U_e is the principal component basis trained on expression offsets, and α_e is the expression parameter vector.
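As a concrete illustration, the following is a minimal numpy sketch of this linear shape model. The vertex count and basis sizes are assumed for the example, and randomly generated matrices stand in for the trained principal components.

```python
# Sketch of the shape model: reconstructed face = mean shape + identity basis
# weighted by alpha_d + expression basis weighted by alpha_e. Sizes are assumed.
import numpy as np

N = 1000                       # number of 3D vertices (assumed)
n_id, n_exp = 199, 29          # identity / expression basis sizes (assumed)

S_mean = np.zeros(3 * N)              # mean face shape, stacked (x, y, z)
U_d = np.random.randn(3 * N, n_id)    # identity principal components (from 3D scans)
U_e = np.random.randn(3 * N, n_exp)   # expression principal components (from offsets)

def reconstruct(alpha_d, alpha_e):
    """Return the 3D face for the given identity and expression parameter vectors."""
    return (S_mean + U_d @ alpha_d + U_e @ alpha_e).reshape(N, 3)

S = reconstruct(np.zeros(n_id), np.zeros(n_exp))  # all-zero parameters give the mean face
```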



Abstract

The invention proposes an end-to-end three-dimensional face reconstruction method based on a deep neural network. The method mainly comprises: a 3D face shape subspace model, the deep neural network (DNN) architecture, and end-to-end training. The process comprises the steps of: employing the 3D face shape subspace model and representing a 3D face as a linear combination of a shape basis and a blend-shape basis; adding a sub-convolutional neural network (fused CNN) to a VGG-based face model to regress expression parameters; and using a multi-task learning loss function for identity parameter prediction and expression parameter prediction, wherein the input of the DNN in end-to-end training is a two-dimensional image and the output consists of an identity parameter vector and an expression parameter vector. The method addresses the impact of pose, expression, and illumination changes in face images and avoids the loss of depth information during image collection. At the same time, it simplifies the framework, reduces computational cost, and improves reconstruction accuracy and recognition robustness.
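A hedged sketch of the multi-task learning loss mentioned above is given below, assuming it is a weighted sum of regression losses on the predicted identity and expression parameter vectors. The use of mean squared error and the weighting factor lambda_exp are assumptions for illustration, not details stated in the abstract.

```python
# Sketch of a multi-task parameter-regression loss: identity term plus a
# weighted expression term. Loss form and weighting are assumed.
import torch
import torch.nn.functional as F

def multitask_loss(pred_id, pred_exp, gt_id, gt_exp, lambda_exp=1.0):
    loss_id = F.mse_loss(pred_id, gt_id)      # identity-parameter regression loss
    loss_exp = F.mse_loss(pred_exp, gt_exp)   # expression-parameter regression loss
    return loss_id + lambda_exp * loss_exp

# Example with hypothetical predicted and ground-truth parameter batches
pred_id, pred_exp = torch.randn(8, 199), torch.randn(8, 29)
gt_id, gt_exp = torch.randn(8, 199), torch.randn(8, 29)
loss = multitask_loss(pred_id, pred_exp, gt_id, gt_exp)
```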

Description

Technical field
[0001] The invention relates to the field of face reconstruction, in particular to an end-to-end three-dimensional face reconstruction method based on a deep neural network.
Background technique
[0002] The face is one of the most important biological characteristics of human beings, reflecting a great deal of important biological information, such as identity, gender, race, age, and expression. 3D face reconstruction technology has a wide range of uses and prospects, and has long been a hotspot and a difficulty in computer vision and computer graphics research. Face modeling has broad application prospects in many fields such as face recognition systems, medicine, film and television, advertising, computer animation, games, video conferencing, video telephony, and human-computer interaction. Especially in face recognition, it can be used in many fields such as public security prevention, fugitive hunting, network security, financial security, and shopping...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06N3/04
CPC: G06N3/04; G06T17/00
Inventor: 夏春秋
Owner: SHENZHEN WEITESHI TECH