Semih Yavuz, a UCSB Computer Science PhD student in Prof. Xifeng Yan's group, received the Best Paper Award at the NeurIPS 2018 workshop "Conversational AI: Today's Practice and Tomorrow's Potential". The workshop focuses on conversational systems and natural language interfaces (such as Siri, Google Now, Cortana, and Alexa), which have become commonplace in the span of only a few years. Its goal is to bring together researchers and practitioners in this area, to clarify impactful research problems, and to generate new ideas and directions for future research, through presentations of accepted forward-looking papers as well as invited talks by senior technical leaders from academia and industry.

Title: DeepCopy: Grounded Response Generation with Hierarchical Pointer Networks

Authors: Semih Yavuz (UCSB), Abhinav Rastogi (Google AI), Guan-Lin Chao (CMU), Dilek Hakkani-Tur (Amazon Alexa AI)

Recent advances in neural sequence-to-sequence models have led to promising results on several language generation tasks, including dialogue response generation, summarization, and machine translation. However, these models are known to have several problems, especially in the context of chit-chat dialogue systems: they tend to generate short, dull responses that are often too generic. Furthermore, these models do not ground conversational responses in knowledge and facts, resulting in turns that are not accurate, informative, or engaging for users. These are essential features that dialogue response generation models must have to serve in more realistic and useful conversational applications. Recently, several dialogue datasets accompanied by relevant external knowledge [29, 5] have been released to facilitate research into remedying such issues by leveraging this additional information. In this paper, we propose and experiment with a series of response generation models that target the general scenario where, in addition to the dialogue context, relevant unstructured external knowledge in the form of text is also available for models to harness. Our approach extends pointer-generator networks [18] by allowing the decoder to hierarchically attend to, and copy from, external knowledge in addition to the dialogue context. We empirically show the effectiveness of the proposed model against several baselines, including [6, 29], on the ConvAI2 challenge.
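To make the hierarchical copy idea concrete, here is a minimal NumPy sketch of how a decoder step might combine a vocabulary (generation) distribution with a hierarchical copy distribution over several token sources (the dialogue context plus knowledge snippets). This is an illustrative assumption-laden toy, not the authors' implementation: the function name, the dot-product attention scoring, and the scalar generation gate are all simplifications chosen for clarity.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - np.max(x))
    return e / e.sum()

def hierarchical_copy_distribution(dec_state, sources, vocab_logits, gen_gate):
    """Toy sketch of a hierarchical pointer step (not the paper's exact model).

    dec_state:    (d,) decoder hidden state
    sources:      list of (token_ids, token_vecs) pairs, one per source
                  (e.g. dialogue context, each knowledge snippet);
                  token_ids index into the output vocabulary
    vocab_logits: (V,) generation logits over the vocabulary
    gen_gate:     probability of generating from the vocabulary vs copying
    Returns a (V,) probability distribution over the vocabulary.
    """
    vocab_size = vocab_logits.shape[0]

    # Level 1: token-level attention within each source,
    # plus an attention-weighted summary vector per source.
    token_dists, summaries = [], []
    for ids, vecs in sources:
        att = softmax(vecs @ dec_state)      # (num_tokens,)
        token_dists.append((ids, att))
        summaries.append(att @ vecs)         # (d,) summary of this source

    # Level 2: source-level attention over the summaries (the hierarchy).
    src_att = softmax(np.stack(summaries) @ dec_state)  # (num_sources,)

    # Copy distribution: scatter hierarchical token probabilities into the vocab.
    copy_dist = np.zeros(vocab_size)
    for w, (ids, att) in zip(src_att, token_dists):
        for tid, p in zip(ids, att):
            copy_dist[tid] += w * p

    # Mix generation and copy distributions with a soft gate.
    return gen_gate * softmax(vocab_logits) + (1.0 - gen_gate) * copy_dist
```

Because both the generation and copy components are proper distributions and the gate is a convex weight, the output always sums to one; tokens appearing in multiple sources accumulate copy mass from each.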