Multi-RGB-D full face material recovery method based on deep learning

A deep-learning-based restoration method in the field of 3D face reconstruction. It addresses the lack of standardized data sets and texture data, the difficulty of restoring the material of texture images, and the inability of existing methods to cover the back of the head; it expands the usable data range, improves the optimization effect, and has strong practicability.

Active Publication Date: 2021-08-24
ZHEJIANG UNIV


Problems solved by technology

Methods that take only a single RGB image as input can reconstruct geometry and material for the frontal face only, and cannot cover the back of the head.
In addition, current reconstruction methods that take multiple RGB-D frames as input still have difficulty restoring the material of the mapped texture image.
There are still relatively few algorithms for image processing and material restoration over the full face range, and no effective standardization of data sets and texture data.



Examples


Embodiment 1

[0077] The inventors tested the effectiveness of the differentiable rendering optimization module of step 2 on a simulated data set. As shown in Figure 8, panel (A) is the original image, panel (B) the result at iteration 0, panel (C) the result at iteration 10, and panel (D) the result at iteration 150. As the number of iterations increases, the optimized material data moves closer to the ground-truth values than the initial result obtained with the material estimation module alone.
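
As a toy illustration of this iterative refinement, the differentiable-rendering step can be sketched as plain gradient descent on a photometric loss. This is a minimal sketch, not the patent's renderer: it assumes a Lambertian model (image = albedo × shading) with known shading, and recovers a per-pixel albedo from a synthetic target.

```python
import numpy as np

# Toy sketch of the step-2 gradient optimization. Assumptions (not from the
# patent): a Lambertian render `albedo * shading` with known, fixed shading,
# and an L2 photometric loss optimized by plain gradient descent.

rng = np.random.default_rng(0)
shading = rng.uniform(0.2, 1.0, size=(8, 8))      # fixed per-pixel shading
albedo_true = rng.uniform(0.1, 0.9, size=(8, 8))  # ground truth to recover
target = albedo_true * shading                    # observed (synthetic) image

albedo = np.full((8, 8), 0.5)                     # initial estimate
lr = 0.5
for it in range(151):                             # iterations 0 .. 150
    residual = albedo * shading - target          # per-pixel photometric error
    grad = 2.0 * residual * shading               # analytic gradient of L2 loss
    albedo -= lr * grad                           # gradient descent step

err = float(np.abs(albedo - albedo_true).mean())  # mean albedo error
```

After 150 iterations the recovered albedo matches the synthetic ground truth far more closely than the constant initialization, mirroring the qualitative behavior reported for iterations 0, 10, and 150.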

Embodiment 2

[0079] The inventors tested, on the simulated data set, the effectiveness of the improved loss function in the differentiable rendering optimization module of step 2. Figure 9 shows one group of test samples: panel (A) is the input image, panel (B) the rendering result before the improvement, panel (C) the rendering result after the improvement, panel (D) the ground-truth albedo map, panel (E) the recovered albedo before the improvement, and panel (F) the recovered albedo after the improvement. Before the loss function is improved, the rendering error is small but the error of the restored texture is large; after the improvement, the rendering error is almost unchanged while the restoration of the albedo texture is significantly better.
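
A toy example can show why an improved loss may leave the rendering error unchanged while fixing the recovered albedo: with a purely photometric loss, albedo and lighting are only determined up to a scale (albedo·k and light/k render identically), so an extra prior term on the albedo can break the ambiguity. The prior value and weight below are illustrative assumptions, not the patent's actual loss terms.

```python
# Scale ambiguity of a purely photometric loss, and one way a regularized
# loss resolves it. All names and weights here are hypothetical.

albedo_true, light_true = 0.6, 2.0
pixel = albedo_true * light_true                 # observed intensity

def photometric(a, l):
    """L2 rendering error for a one-pixel Lambertian model."""
    return (a * l - pixel) ** 2

# Two decompositions with identical (zero) rendering error:
a1, l1 = 0.6, 2.0                                # correct decomposition
a2, l2 = 1.2, 1.0                                # scaled, same product
assert photometric(a1, l1) == photometric(a2, l2) == 0.0

# "Improved" loss: add a prior pulling albedo toward a plausible skin value.
albedo_prior, w = 0.55, 0.1                      # illustrative constants
def improved(a, l):
    return photometric(a, l) + w * (a - albedo_prior) ** 2
```

Under the improved loss the correct-scale decomposition scores strictly better, while its photometric term (the rendering error) is unchanged, which matches the behavior described for panels (B), (C), (E), and (F).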

Embodiment 3

[0081] The inventors tested the effectiveness of the method on real samples. Figure 10 compares the material restoration results on an actual sample: panel (A) shows the captured photographs, panel (B) the texture image synthesized by the capture device, panel (C) the optimized rendering result, and panel (D) the result composited back into the original image for comparison. The method can restore the full-face texture range, including the ears and neck, and the restored material data has high fidelity.



Abstract

The invention discloses a multi-RGB-D full-face material recovery method based on deep learning. The method comprises two stages: image-based estimation of face material information, and gradient optimization based on differentiable rendering. Step 1: first preprocess the geometric and texture data to generate a mask covering the full-face skin region; then construct a texture estimation module and an illumination estimation module, and generate a simulated training data set; finally, use the material texture and illumination estimation modules together with the simulated training data set to obtain initial values for the texture information and illumination coefficients. Step 2: first process the scanned geometric data, then extend the rendering equation to cover the full face; improve the loss function to obtain the optimization result; finally, perform detail optimization on special regions. The invention expands the data range of face material recovery technology and improves its optimization effect.
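
The illumination-coefficient initialization of step 1 can be sketched under a second-order spherical-harmonics (SH) lighting model, a common choice in face relighting. The patent's estimation module is a learned network; the linear least-squares solve below is an illustrative stand-in, with all data synthetic.

```python
import numpy as np

# Illustrative SH illumination estimation: given per-pixel normals, albedo,
# and observed intensities, recover 9 second-order SH lighting coefficients.
# A hypothetical stand-in for the patent's learned illumination module.

def sh_basis(n):
    """Unnormalized second-order SH basis for unit normals n of shape (N, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        np.ones_like(x), y, z, x,
        x * y, y * z, 3 * z ** 2 - 1, x * z, x ** 2 - y ** 2,
    ], axis=1)

rng = np.random.default_rng(1)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.uniform(0.3, 0.9, size=500)

coeffs_true = rng.normal(size=9)                    # unknown lighting
image = albedo * (sh_basis(normals) @ coeffs_true)  # synthetic intensities

# Solve image = albedo * (B @ c) for c in the least-squares sense.
A = albedo[:, None] * sh_basis(normals)
coeffs_est, *_ = np.linalg.lstsq(A, image, rcond=None)
```

With noiseless synthetic data the solve recovers the lighting coefficients exactly; in practice such a result would only serve as the initial value that the step-2 differentiable-rendering optimization then refines.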

Description

technical field

[0001] The present invention relates to the field of three-dimensional face reconstruction, and in particular to a multi-RGB-D full-face material restoration method based on deep learning.

[0002] technical background

[0003] With the rapid development of smartphone entertainment applications, face applications benefit from obtaining geometric and texture information through 3D face reconstruction. A 3D face reconstruction method generally comprises three modules: face geometry reconstruction, face texture mapping, and texture material restoration. Current 3D face reconstruction technology can reconstruct geometry and texture from one or more input RGB images, and can obtain more refined geometry and texture-mapping results from RGB-D input data.

[0004] However, the algorithms implemented so far still have some deficiencies. Only a single ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00, G06T15/00
CPC: G06T17/00, G06T15/005, G06T2200/04
Inventor: 任重, 於航, 翁彦琳, 周昆
Owner: ZHEJIANG UNIV