[Headshot of Denny Zhou, wearing black-rimmed glasses and smiling]

Speaker: Denny Zhou

Date: Monday, January 16th, 2023

Time: 3:30 - 4:30 pm

Location: HH 1010

Host: William Wang

Title: Teach Language Models to Reason

Abstract: Can we teach language models to solve unseen NLP/ML tasks the way we teach humans, by providing only a few examples? Recently, combined with Google's newest large language model, PaLM-540B, chain-of-thought prompting (CoT) integrated with self-consistency decoding (SC) has demonstrated striking performance on many challenging NLP/ML tasks, greatly outperforming state-of-the-art results in the literature, which typically require 100x - 1000x more annotated examples and task-specific model training. Moreover, the results produced by CoT + SC are fully interpretable. More recently, least-to-most prompting was developed to further enable large language models to achieve challenging easy-to-hard generalization, in particular compositional generalization. This line of reasoning research was highlighted by Google CEO Sundar Pichai at Google I/O 2022. In this talk, Denny Zhou will present this progress and discuss its implications for the future of AI research.
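
For readers unfamiliar with these methods, the following is a minimal sketch of how chain-of-thought prompting with self-consistency decoding can be wired up: sample several reasoning paths from a language model at nonzero temperature, then take a majority vote over the final answers. The generate() function, the prompt format, and the "The answer is ..." extraction convention are all hypothetical placeholders for illustration, not part of any system described in the talk.

    import re
    from collections import Counter

    def self_consistency(generate, cot_prompt, question, num_samples=10):
        """Sample several chain-of-thought reasoning paths and majority-vote
        over their final answers (a sketch of self-consistency decoding).

        `generate` is assumed to be a function that sends a prompt to a
        language model and returns one sampled completion.
        """
        full_prompt = cot_prompt + "\nQ: " + question + "\nA:"
        answers = []
        for _ in range(num_samples):
            # Sampling at temperature > 0 is assumed, so each call can
            # yield a different reasoning path.
            reasoning = generate(full_prompt)
            # Assume each completion ends with "The answer is <x>."
            match = re.search(r"The answer is (.+?)\.", reasoning)
            if match:
                answers.append(match.group(1).strip())
        # Return the most frequent answer across the sampled paths.
        return Counter(answers).most_common(1)[0][0] if answers else None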

Bio: Denny Zhou is a Principal Scientist / Research Director at Google Brain, where he founded and leads the Reasoning team. His work on reasoning includes chain-of-thought prompting, self-consistency, least-to-most prompting, and instruction finetuning (FLAN2). Denny also led SpreadSheetCoder, which has been integrated into Google Sheets to automatically generate formulas for users, and MobileBERT, which has been widely adopted in mobile apps as a highly compact and fast language model. Denny received the Google Research Impact Award and the WSDM 2022 Test of Time Award. He has also served as a (senior) area chair for NeurIPS, ICML, and ICLR every year. For more information, see his homepage, Google Scholar, and Twitter.