Publications
InCoder: A Generative Model for Code Infilling and Synthesis (ICLR 2023 spotlight, top 6%)
Automatic Correction of Human Translations (NAACL 2022 Best Task Paper, Best Resource Paper, Best Theme Paper Honorable Mention)
UniMASK: Unified Inference in Sequential Decision Problems (NeurIPS 2022 oral, top 1.8%)
Research Experience
Worked on research and product at Lilt, focusing on continual adaptation and human-in-the-loop machine translation. Spent a summer with the Natural Language Understanding group at Google Research NY, working on long-context memory architectures.
Education
PhD student at Berkeley AI Research, advised by Anca Dragan and Dan Klein. Previously double-majored in Computer Science and Philosophy at MIT, where she conducted research with the Computational Cognitive Science Group, advised by Kelsey Allen and Josh Tenenbaum.
Background
Aims to build AI that augments humans, making people smarter and enabling them to accomplish things they couldn't do before. Research interests include interactive models, collaborative environments, and continual learning & memory.