Wednesday, October 18, 2006

[CMU VASC seminar series] High Resolution Acquisition, Tracking and Transfer of Dynamic 3D Facial Expressions

The advent of new technologies that allow the capture of massive amounts of high-resolution, high-frame-rate face data leads us to propose data-driven face models that describe the detailed appearance of static faces and track subtle geometry changes during expressions. However, since the dense data in these 3D scans are not registered in object space, inter-frame correspondences cannot be established, which makes tracking facial features, estimating expression dynamics, and performing other analyses difficult.

In order to use such data for the temporal study of subtle expression dynamics, an efficient non-rigid 3D motion tracking algorithm is needed to establish inter-frame correspondences. In this talk, I will present two frameworks for high-resolution, non-rigid, dense 3D point tracking.

The first framework is a hierarchical scheme using a deformable generic face model: a generic face mesh is first deformed to fit the data at a coarse level; then, to capture highly local deformations, we use a variational algorithm for non-rigid shape registration based on the integration of an implicit shape representation and Free-Form Deformations (FFD). The second framework is a fully automatic tracking method based on harmonic maps with interior feature correspondence constraints. The novelty of this work is an algorithmic framework for 3D tracking that unifies the tracking of intensity and geometric features, using harmonic maps with added feature correspondence constraints. Due to the strong implicit and explicit smoothness constraints imposed by both algorithms, together with the high-resolution data, the resulting registration/deformation field is smooth and continuous. Both methods are validated through a series of experiments demonstrating their accuracy and efficiency.
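To give a flavor of the FFD component of the first framework, here is a minimal sketch of classic Bernstein-polynomial Free-Form Deformation in the style of Sederberg and Parry: points embedded in a control lattice are warped by moving lattice control points. The talk's method goes further by coupling the FFD with an implicit shape representation inside a variational registration; this sketch only illustrates the basic lattice warp, and all names and lattice sizes are illustrative.

```python
# Minimal Bernstein-polynomial FFD: a control lattice warps embedded points.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def ffd(points, lattice):
    """Deform points (N,3), assumed normalized to [0,1]^3 lattice
    coordinates, by a control lattice of shape (l+1, m+1, n+1, 3)."""
    l, m, n = (s - 1 for s in lattice.shape[:3])
    out = np.zeros_like(points)
    for p_idx, (s, t, u) in enumerate(points):
        acc = np.zeros(3)
        for i in range(l + 1):
            bi = bernstein(l, i, s)
            for j in range(m + 1):
                bj = bernstein(m, j, t)
                for k in range(n + 1):
                    acc += bi * bj * bernstein(n, k, u) * lattice[i, j, k]
        out[p_idx] = acc
    return out

# Example: a 3x3x3 lattice initialized to the identity mapping, then one
# control point displaced, which bends the embedded geometry locally.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
grid[1, 1, 2] += np.array([0.0, 0.0, 0.2])   # push one control point outward
pts = np.random.rand(100, 3)                 # points already in lattice coords
warped = ffd(pts, grid)
```

Because each point's new position is a smooth polynomial blend of nearby control points, the resulting deformation field is inherently smooth, which is one reason FFDs suit the local refinement stage described above.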
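For the second framework, the core computational step of a discrete harmonic map can be sketched as a sparse Laplace solve with Dirichlet constraints: mesh vertices are mapped to a 2D domain, with matched feature vertices pinned to shared target positions, so mapping every frame into the same domain yields inter-frame correspondences. This sketch uses uniform edge weights for brevity (cotangent weights are the standard refinement) and is only an assumption-laden illustration of the idea, not the speaker's implementation; all function and variable names are invented.

```python
# Discrete harmonic map: solve the Laplace equation on a mesh graph with
# pinned (Dirichlet) vertices, e.g. boundary and interior feature points.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def harmonic_map(n_verts, edges, pinned):
    """edges: (E,2) int array of mesh edges; pinned: dict mapping a vertex
    index to its fixed (u,v) position in the 2D target domain.
    Returns an (n_verts, 2) array of embedding coordinates."""
    # Graph Laplacian with uniform edge weights.
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(n_verts, n_verts)).tocsr()
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

    fixed = np.array(sorted(pinned))
    free = np.setdiff1d(np.arange(n_verts), fixed)
    targets = np.array([pinned[v] for v in fixed])

    # Solve L_ff x_f = -L_fc x_c for the free vertices, per coordinate.
    L_ff = L[free][:, free].tocsc()
    rhs = -L[free][:, fixed] @ targets
    uv = np.zeros((n_verts, 2))
    uv[fixed] = targets
    for c in range(2):
        uv[free, c] = spla.spsolve(L_ff, rhs[:, c])
    return uv
```

Pinning interior feature correspondences as extra Dirichlet constraints, as the talk describes, slots directly into this formulation: each matched feature simply becomes another fixed vertex in the solve.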

Furthermore, the availability of high-quality dynamic expression data opens a number of research directions in face modeling. In this talk, I will also demonstrate several graphics applications that use the captured motion data to synthesize new expressions, such as transferring an expression from a source face to a target face.
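As a deliberately naive illustration of why dense correspondences enable expression transfer: once every frame shares a common vertex indexing, the source face's per-vertex motion can simply be replayed on a target face. The transfer method presented in the talk is certainly more sophisticated (it must account for differences in face shape and scale, among other things); this sketch, with invented names, only conveys the basic idea.

```python
# Naive expression transfer via per-vertex displacements, assuming the
# tracking stage has put all meshes into dense correspondence.
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """All inputs are (N,3) vertex arrays in correspondence (vertex i is
    the same facial point on every mesh). Returns the target mesh with
    the source expression applied."""
    displacement = src_expr - src_neutral   # motion of each tracked point
    return tgt_neutral + displacement       # replay it on the target face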

Bio:

Yang Wang received his B.S. and M.Sc. degrees in Computer Science from Tsinghua University in 1998 and 2000, respectively. He is a Ph.D. student in the Computer Science Department at the State University of New York at Stony Brook, where he has been working with Prof. Dimitris Samaras since 2000. He specializes in illumination modeling and estimation, 3D non-rigid motion tracking, and facial expression analysis and synthesis. He is a member of ACM and IEEE.
