Large Scale Scene Matching for Graphics and Vision

UCSB COMPUTER SCIENCE DEPARTMENT PRESENTS:
THURSDAY, MARCH 19, 2009
10:00 – 11:00
CS Conference Room, Room 1132

HOST: Yuan-Fang Wang

SPEAKER: James Hays
Carnegie Mellon University

Title: Large Scale Scene Matching for Graphics and Vision

Abstract:

The complexity of the visual world makes it difficult for computer
vision to understand images and for computer graphics to synthesize
visual content. The traditional computer vision or computer graphics
pipeline mitigates this complexity with a bottom-up, divide-and-conquer
strategy (e.g., segmenting then classifying, assembling part-based
models, or using scanning-window detectors). In this talk I will discuss
research that is fundamentally different, enabled by the observation
that while the space of images is infinite, the space of “scenes” might
not be astronomically large. With access to imagery on an Internet
scale, for most images there exist numerous semantically and
structurally similar scenes. My research focuses on exploiting and
refining large-scale scene matching to short-circuit the typical
computer vision and graphics pipelines for tasks such as scene
completion, image geolocation, object detection, and high-interest photo
selection.
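
To make the scene-matching idea concrete, here is a minimal sketch (not the system presented in the talk, which relies on richer scene descriptors and Internet-scale image collections): each image is reduced to a simple global descriptor, and a query is matched to its nearest neighbors in descriptor space. The names tiny_descriptor and nearest_scenes, and the tiny-image color descriptor itself, are illustrative assumptions.

import numpy as np

def tiny_descriptor(image, size=8):
    """Global scene descriptor: average-pool the image to a size x size
    grid per color channel and L2-normalize. A crude stand-in for the
    descriptors used in real scene-matching systems."""
    h, w, c = image.shape
    desc = np.zeros((size, size, c), dtype=np.float64)
    for i in range(size):
        for j in range(size):
            block = image[i * h // size:(i + 1) * h // size,
                          j * w // size:(j + 1) * w // size]
            desc[i, j] = block.mean(axis=(0, 1))
    desc = desc.ravel()
    return desc / (np.linalg.norm(desc) + 1e-8)

def nearest_scenes(query, database, k=5):
    """Return indices of the k database images whose descriptors are
    closest (Euclidean distance) to the query's descriptor."""
    q = tiny_descriptor(query)
    dists = [np.linalg.norm(tiny_descriptor(img) - q) for img in database]
    return np.argsort(dists)[:k]

# Toy usage with random arrays standing in for photographs; a real
# system would match against millions of downloaded images.
rng = np.random.default_rng(0)
database = [rng.random((240, 320, 3)) for _ in range(100)]
query = rng.random((240, 320, 3))
print(nearest_scenes(query, database, k=3))

Because the descriptor is low-dimensional and global, matching can be done by brute-force nearest-neighbor search over very large collections, which is what makes the "find similar scenes first" strategy practical at Internet scale.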

Bio:

James Hays received his B.S. in Computer Science from Georgia Institute
of Technology in 2003. He is a Ph.D. student in Carnegie Mellon
University’s Computer Science Department and is advised by Alexei A.
Efros. His research interests are in computer vision and computer
graphics, focusing on image understanding and manipulation by leveraging
massive amounts of data. His research has been supported by a National
Science Foundation Graduate Research Fellowship. He will graduate this
summer.