
Model-Based Stereo Matching

A stereo matching and modeling technology, applied in the field of image processing, that can solve the problems of occlusions, lack of texture, and specular highlights that make conventional stereo matching techniques unreliable.

Active Publication Date: 2013-05-23
ADOBE SYST INC
3 Cites · 30 Cited by

AI Technical Summary

Benefits of technology

This patent describes a method for creating a high-quality depth map of a person's face using a combination of stereo matching and a 3D face model. The method takes into account the coarse shape of the person's face from stereo matching and the details from the 3D face model to create a smooth and accurate representation of the face. The method can be semi-automated and uses a fusion technique to combine the stereo results with the 3D model. It also uses a shape-from-shading method to refine the normals and capture fine facial details such as wrinkles. The quality of the normals allows for re-lighting of the person's face from different light positions. The method can be applied to a fused model that combines the rough stereo-based 3D face model with a laser-scanned face model, incorporating the details from both sources. The formulation is solved efficiently using a conjugate gradient method and integrates the confidence of the result. Additionally, the method uses loopy belief propagation for confidence estimation and shading information to refine the fused model.
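The confidence-weighted fusion described above can be posed as a sparse linear least-squares problem and solved with conjugate gradient. The sketch below is a minimal illustration, not the patent's actual formulation: `fuse_depth`, its energy terms, and the weights `lam`/`mu` are all assumptions chosen to show the general idea of trusting the stereo depth where confidence is high, falling back to the model depth elsewhere, and regularizing with a smoothness (Laplacian) term.

```python
import numpy as np
from scipy.sparse import diags, kron, eye
from scipy.sparse.linalg import cg

def fuse_depth(d_stereo, d_model, conf, lam=1.0, mu=4.0):
    """Fuse a noisy stereo depth map with a roughly aligned model depth map.

    Minimizes  sum_i conf_i * (d_i - d_stereo_i)^2
             + lam * sum_i (1 - conf_i) * (d_i - d_model_i)^2
             + mu * (smoothness penalty on d),
    a sparse symmetric positive-definite system solved with conjugate gradient.
    """
    h, w = d_stereo.shape
    c = conf.ravel()
    # 1-D second-difference operator; combined per axis into a 2-D Laplacian
    def lap1d(k):
        return diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
    L = kron(eye(h), lap1d(w)) + kron(lap1d(h), eye(w))
    A = diags(c) + lam * diags(1.0 - c) + mu * L
    b = c * d_stereo.ravel() + lam * (1.0 - c) * d_model.ravel()
    d, info = cg(A, b, maxiter=2000)
    return d.reshape(h, w)
```

With confidence near 1 everywhere the output tracks the stereo depth; with confidence near 0 it tracks the model depth, which mirrors the fusion behavior the summary describes.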

Problems solved by technology

Conventional stereo matching techniques are unreliable in many cases due to occlusions (where a point may be visible in one stereo image but not the other), lack of texture (constant color, not much detail), and specular highlights (a highlighted portion that may move around in different camera views).
All of these difficulties exist when applying stereo matching techniques to human faces, with lack of texture being a particular problem.
While commercial stereo cameras are emerging, many if not most image processing applications do not provide tools to process stereo images, or, if they do, the tools have limitations.
Embodiments may apply stereo vision to the input stereo image pair to obtain a rough 3D face model, which may be limited in accuracy, and then use it to guide the registration and alignment of the laser-scanned face model.
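The rough stereo depth mentioned above is typically produced by local block matching along the epipolar (horizontal) line. The sketch below is a generic sum-of-squared-differences matcher, not the patent's method: the function name, window size, and disparity range are illustrative assumptions. Its weakness in low-texture regions (many windows have near-identical costs) is exactly the failure mode this passage describes.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_match_disparity(left, right, max_disp=16, block=5):
    """Brute-force SSD block matching for a rectified grayscale stereo pair.

    For each left-image pixel, pick the horizontal shift d (left x matches
    right x - d) minimizing the squared difference over a block x block window.
    """
    h, w = left.shape
    best_cost = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp + 1):
        # shift the right image right by d so columns align with the left image
        shifted = np.empty((h, w), dtype=np.float64)
        shifted[:, d:] = right[:, :w - d]
        shifted[:, :d] = right[:, :1]  # replicate border for shifted-in columns
        cost = uniform_filter((left.astype(np.float64) - shifted) ** 2,
                              size=block)
        better = cost < best_cost
        disp[better] = d
        best_cost[better] = cost[better]
    return disp
```

On textured synthetic data this recovers the true shift; on constant-color regions the cost surface is flat and the result is unreliable, which is why the embodiments fall back on the 3D model there.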

Method used


Image

  • Model-Based Stereo Matching (figures)

Examples

Experimental program
Comparison scheme
Effect test

Example results

[0100] FIG. 11 illustrates modeling results for an example face, according to some embodiments. FIG. 11 (a) and FIG. 11 (b) are the input stereo images. FIG. 11 (c) is the close-up of the face in FIG. 11 (a). FIG. 11 (d) and FIG. 11 (e) are the confidence map and depth map computed from stereo matching, respectively. FIG. 11 (f) is the registered laser-scanned model and FIG. 11 (g) is the fused model. FIG. 11 (h)-(j) are the screenshots of the stereo model, laser-scanned model and fused model, respectively. FIG. 11 (k) is the estimated surface normal map, and FIG. 11 (l) is the re-lighted result of FIG. 11 (c) using the estimated normal map in FIG. 11 (k).

[0101]FIG. 11 illustrates modeling results of a person whose face is quite different from the laser-scanned model used, as can be seen from the stereo model in FIG. 11 (h) and registered laser-scanned model in FIG. 11 (i). The fused model is presented in FIG. 11 (j). The incorrect mouth and chin are corrected in FIG. 11 (j). FIG. 11 (k) ...


Abstract

Model-based stereo matching from a stereo pair of images of a given object, such as a human face, may result in a high quality depth map. Integrated modeling may combine coarse stereo matching of an object with details from a known 3D model of a different object to create a smooth, high quality depth map that captures the characteristics of the object. A semi-automated process may align the features of the object and the 3D model. A fusion technique may employ a stereo matching confidence measure to assist in combining the stereo results and the roughly aligned 3D model. A normal map and a light direction may be computed. In one embodiment, the normal values and light direction may be used to iteratively perform the fusion technique. A shape-from-shading technique may be employed to refine the normals implied by the fusion output depth map and to bring out fine details. The normals may be used to re-light the object from different light positions.
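The abstract's final step, re-lighting the object from different light positions using the refined normals, is standard Lambertian shading. The sketch below is a minimal illustration under that assumption; the function name `relight` and the single-channel albedo input are hypothetical, and real faces would also need a specular term the abstract does not detail.

```python
import numpy as np

def relight(albedo, normals, light_dir):
    """Lambertian re-lighting: intensity = albedo * max(0, n . l).

    albedo:    (h, w) reflectance map
    normals:   (h, w, 3) unit surface normals
    light_dir: 3-vector toward the light (normalized internally)
    """
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)  # (h, w, 3) . (3,) -> (h, w)
    return albedo * shading
```

Sweeping `light_dir` over different positions produces the re-lit renderings the abstract describes; the quality of the result depends directly on the accuracy of the shape-from-shading-refined normals.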

Description

PRIORITY INFORMATION

[0001] This application claims benefit of priority of U.S. Provisional Application Ser. No. 61/375,536 entitled "Methods and Apparatus for Model-Based Stereo Matching" filed Aug. 20, 2010, the content of which is incorporated by reference herein in its entirety.

BACKGROUND

[0002] 1. Technical Field

[0003] This disclosure relates generally to image processing, and more specifically, stereo image processing.

[0004] 2. Description of the Related Art

[0005] Conventional stereo matching techniques are unreliable in many cases due to occlusions (where a point may be visible in one stereo image but not the other), lack of texture (constant color, not much detail), and specular highlights (a highlighted portion that may move around in different camera views). All of these difficulties exist when applying stereo matching techniques to human faces, with lack of texture being a particular problem. The difficulties apply to other types of objects as well. FIG. 1 illustrates an example...

Claims


Application Information

Patent Type & Authority: Applications (United States)
IPC(8): G06K9/00
CPC: G06T2207/30201; G06T2207/10012; H04N2013/0081; G06T7/0075; G06K9/00; G06T7/593
Inventors: COHEN, SCOTT D.; YANG, QINGXIONG
Owner: ADOBE SYST INC