Computational Study Of Nonverbal Social Communication

Date: 
Monday, April 5, 2010 - 4:50pm

UCSB COMPUTER SCIENCE DEPARTMENT PRESENTS:

Friday, April 23, 2010
2:00 – 3:00
Computer Science Conference Room, Harold Frank Hall Rm. 1132

HOST: Matthew Turk

SPEAKER: Louis-Philippe Morency

Title: Computational Study Of Nonverbal Social Communication

Abstract:

The goal of this emerging research field, the computational study of
nonverbal social communication, is to recognize, model and predict human
nonverbal behavior in the context of interaction with virtual humans,
robots and other human participants. At the core of this
research field is the need for new computational models of human
interaction emphasizing the multi-modal, multi-participant and
multi-behavior aspects of human behavior. This multi-disciplinary
research topic overlaps the fields of multi-modal interaction, social
psychology, computer vision, machine learning and artificial
intelligence, and has many applications in areas as diverse as medicine,
robotics and education.

During my talk, I will focus on three novel approaches for achieving
efficient and robust nonverbal behavior modeling and recognition: (1) a
new visual tracking framework (GAVAM) with automatic initialization and
bounded drift which acquires online the view-based appearance of the
object, (2) the use of latent-state models in discriminative sequence
classification (Latent-Dynamic CRF) to capture the influence of
unobservable factors on nonverbal behavior and (3) the integration of
contextual information (specifically dialogue context) to improve
nonverbal prediction and recognition.

Bio:

Dr. Louis-Philippe Morency is currently a research assistant professor
at the University of Southern California (USC) and a research scientist
at the USC Institute for Creative Technologies, where he leads the
Multimodal Communication and Computation Laboratory (MultiComp Lab). He
received
his Ph.D. from MIT Computer Science and Artificial Intelligence
Laboratory in 2006. His main research interest is the computational
study of nonverbal social communication, a multi-disciplinary research
topic that overlaps the fields of multi-modal interaction, machine
learning, computer vision, social psychology and artificial
intelligence. He developed “Watson”, a real-time library for nonverbal
behavior recognition that became the de-facto standard for adding
perception to embodied agent interfaces. He has received many awards for
his work on nonverbal behavior computation, including three best-paper
awards in 2008
(at various IEEE and ACM conferences). He was recently selected by IEEE
Intelligent Systems as one of the “Ten to Watch” for the future of AI
research.