Publications
'How Do Vision-Language Models Process Conflicting Information Across Modalities?' (NEMI'25); 'mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?' (Findings of NAACL 2024)
Research Experience
Currently investigating the modularity of representations and circuits in transformer models. Joined the LUNAR lab as a PhD student in Fall 2025.
Education
2025-Present: PhD in Computer Science, Brown University
2023-2025: Sc.M. in Computer Science, Brown University
2019-2023: B.S. in Computer Science & Philosophy, Tufts University
Background
PhD student in Computer Science at Brown University, with research interests in interpretability and human-model comparison. Believes that attributing cognitive properties to computational models requires two components: operationalized definitions of those cognitive properties and sufficient knowledge of the inner workings of the models in question.
Miscellany
Links to CV, Twitter, GitHub, Google Scholar, and Semantic Scholar are provided on the personal website.