It has long been envisioned that one day computers will understand natural language and anticipate what we need and when we need it, so they can proactively complete tasks on our behalf. As computing becomes more ubiquitous (e.g., wearable devices), the range of such applications keeps growing. Industry has invested heavily in virtual personal assistants, including Microsoft's Cortana, Google Assistant, Apple's Siri, Amazon's Alexa, and Facebook's Messenger bots. In this talk, we will give an overview of dialogue systems, describing the overall system architecture and its key components. We will then highlight challenges and recent trends driven by deep learning and big data, and discuss the potential of these systems to fully redefine human-computer interaction going forward.
Yun-Nung (Vivian) Chen is an assistant professor in the Department of Computer Science and Information Engineering at National Taiwan University. Her research interests include language understanding, dialogue systems, natural language processing, deep learning, and multimodality. She received a Google Faculty Research Award in 2016, two Best Student Paper Awards from IEEE ASRU 2013 and IEEE SLT 2010, and was a Best Student Paper nominee at INTERSPEECH 2012. Chen earned her Ph.D. from the School of Computer Science at Carnegie Mellon University, Pittsburgh, in 2015. Prior to joining National Taiwan University, she worked at Microsoft Research in the Deep Learning Technology Center.