Publications
'Do Natural Language Descriptions of Model Activations Convey Privileged Information?' (arXiv, 2025); 'Multi-Field Adaptive Retrieval' (ICLR 2025, spotlight, top 5%); 'Function Vectors in Large Language Models' (ICLR 2024); 'Summarizing, Simplifying, and Synthesizing Medical Evidence using GPT-3 (with Varying Success)' (ACL 2023).
Awards
2022 NSF Graduate Research Fellowship; Honorable Mention, 2021 NSF Graduate Research Fellowship competition.
Research Experience
AI Resident at FAIR/Meta AI, working with Marjan Ghazvininejad and Mike Lewis; Intern at Microsoft Research, collaborating with Tristan Naumann; Undergraduate research at the University of Washington with Shwetak Patel (ubiquitous computing) and Noah Smith (natural language processing).
Education
PhD: Northeastern University, Advisor: Byron Wallace, Started: Fall 2022; Bachelor's Degree: University of Washington, Advisors: Shwetak Patel and Noah Smith.
Background
Research Interests: Language model behaviors; Specialization: Natural Language Processing, Machine Learning; Brief Introduction: A PhD student at Northeastern University, advised by Byron Wallace. Focuses on how language models generalize to unseen data and why they acquire particular behaviors.