Answering questions about text frequently requires aggregating information from multiple places in that text. Self-attention language models, such as Google's BERT, have achieved state-of-the-art results on question answering tasks over a single sentence, or tasks that do not require multi-hop reasoning. In this work, we report the findings and feasibility of building end-to-end neural models on top of self-attention based language models for a variety of question answering settings, including knowledge-graph-based and text-based data, and varying answer-availability settings.
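The self-attention mechanism underlying models such as BERT can be sketched as scaled dot-product attention, in which every token computes a weighted combination of all tokens in the input. This is a minimal, generic illustration (the weight matrices and dimensions here are arbitrary placeholders), not the speaker's actual implementation:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # similarity of each query to every key, scaled by sqrt(key dimension)
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # each output token is an attention-weighted mixture of value vectors
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))                    # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Because every token attends to every other token in one step, stacked self-attention layers can, in principle, connect evidence scattered across a passage, which is what multi-hop question answering demands.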
Date: Wednesday, May 29, 2019 - 4:00pm to 5:00pm
Title: Multi-hop Reasoning for Question Answering with Self-attention Networks
Speaker: Nidhi Ravi Hiremath
Committee: William Wang (Chair), Tao Yang