Wednesday, March 18, 2009

CMU talk: Computational Study Of Nonverbal Social Communication

Special VASC Seminar
Thursday, March 19, 2009

Computational Study Of Nonverbal Social Communication
Louis-Philippe Morency
USC Institute for Creative Technologies

Abstract:
The goal of this emerging research field is to recognize, model and predict human nonverbal behavior in the context of interaction with virtual humans, robots and other human participants. At the core of this research field is the need for new computational models of human interaction emphasizing the multi-modal, multi-participant and multi-behavior aspects of human behavior. This multi-disciplinary research topic overlaps the fields of multi-modal interaction, social psychology, computer vision, machine learning and artificial intelligence, and has many applications in areas as diverse as medicine, robotics and education.

During my talk, I will focus on three novel approaches to achieve efficient and robust nonverbal behavior modeling and recognition: (1) a new visual tracking framework (GAVAM) with automatic initialization and bounded drift which acquires online the view-based appearance of the object, (2) the use of latent-state models in discriminative sequence classification (Latent-Dynamic CRF) to capture the influence of unobservable factors on nonverbal behavior and (3) the integration of contextual information (specifically dialogue context) to improve nonverbal prediction and recognition.
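To make the latent-state idea in approach (2) concrete, here is a minimal illustrative sketch, not Morency's actual Latent-Dynamic CRF implementation: each label owns a disjoint set of hidden states, and the conditional probability of a label sequence is the forward-algorithm sum over all hidden-state paths consistent with that labeling, normalized by the sum over all paths. The score matrices, the state-to-label mapping, and all function names here are assumptions made for illustration only.

```python
import numpy as np

def log_partition(obs_scores, trans, allowed):
    """Forward algorithm in log space over hidden states.

    obs_scores: (T, H) per-step emission scores for each hidden state
    trans:      (H, H) transition scores between hidden states
    allowed:    (T, H) boolean mask of states permitted at each step
    Returns log of the sum of exp(path score) over allowed paths.
    """
    T, H = obs_scores.shape
    alpha = np.where(allowed[0], obs_scores[0], -np.inf)
    for t in range(1, T):
        # alpha_new[j] = logsumexp_i(alpha[i] + trans[i, j]) + obs_scores[t, j]
        alpha = np.logaddexp.reduce(alpha[:, None] + trans, axis=0) + obs_scores[t]
        alpha = np.where(allowed[t], alpha, -np.inf)
    return np.logaddexp.reduce(alpha)

def ldcrf_log_prob(y_seq, obs_scores, trans, state_label):
    """log P(y | x): restrict the forward sum to hidden states whose
    label matches y at each step, then normalize over all paths."""
    T, H = obs_scores.shape
    mask_y = np.array([[state_label[h] == y_seq[t] for h in range(H)]
                       for t in range(T)])
    all_mask = np.ones((T, H), dtype=bool)
    return (log_partition(obs_scores, trans, mask_y)
            - log_partition(obs_scores, trans, all_mask))
```

Because the hidden-state sets are disjoint across labels, every hidden path corresponds to exactly one label sequence, so the probabilities of all possible labelings sum to one; the latent states let the model capture unobservable substructure (e.g. phases of a head nod) that a plain CRF over labels cannot.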

Bio:
Dr. Louis-Philippe Morency is currently a research professor at the USC Institute for Creative Technologies, where he leads the Nonverbal Behaviors Understanding project (ICT-NVREC). He received his Ph.D. from the MIT Computer Science and Artificial Intelligence Laboratory in 2006. His main research interest is the computational study of nonverbal social communication, a multi-disciplinary research topic that overlaps the fields of multi-modal interaction, computer vision, machine learning, social psychology and artificial intelligence. He developed "Watson", a real-time library for nonverbal behavior recognition, which became the de facto standard for adding perception to embodied agent interfaces. He has received many awards for his work on nonverbal behavior computation, including three best-paper awards in 2008 (at various IEEE and ACM conferences). He was recently selected by IEEE Intelligent Systems as one of the "Ten to Watch" for the future of AI research.
