Awards
Recognized with multiple awards, including best method paper at NAACL 2022, outstanding paper at ACL 2023, and outstanding paper at EMNLP 2023. Also awarded the NSERC PGS-D fellowship and received an honorable mention for the NSF GRFP.
Research Experience
Interned at the Allen Institute for AI on the Mosaic team and at Microsoft Research in the Natural Language Processing Group.
Education
Received a BSc (Honours) in Computer Science from the University of British Columbia in 2017 and a PhD from the University of Washington in 2024, supervised by Yejin Choi.
Background
Assistant Professor with research interests in natural language processing and artificial intelligence, particularly the capabilities and limits of large language models. His lab explores the divergence between model capabilities and human intuitions, the unpredictability and creativity of LLMs, and decoding-time algorithms that give models new capabilities without fine-tuning.
Miscellany
Enjoys cooking, especially making bread, pasta, ice cream, and cocktails, and loves movies and music of all kinds.