Publications: Work with Google DeepMind covered by VentureBeat.
Paper Awards: CoLM'24 Outstanding Paper (Reverse Engineering Knowledge Cutoffs); SIGIR'24 Best Paper Nominee (Evaluating Long-Form Report Generation); ECIR'25 Honorable Mention (Evaluating Instructions in IR).
Project Contributions: Over 25 million downloads of ModernBERT; developer of Promptriever and Rank1/Rank-K.
Research Experience
Interning with the FAIR language team at Meta's Superintelligence Lab in Fall 2025, working with Xilun Chen and Scott Yih. Previously interned at Google DeepMind, Samaya AI, AI2, and Apple.
Education
PhD student, Johns Hopkins University, Center for Language and Speech Processing; Advisors: Benjamin Van Durme and Dawn Lawrie; Expected graduation: Fall 2025; Specialization: Language Models and Information Retrieval.
Background
Research interests lie at the intersection of language models and information retrieval, with the goal of improving how models find, understand, and synthesize information. Recent focus areas include agentic search and improved retrieval, evaluation and measurement, and pre-training (decoders, encoders, multilingual). Currently a final-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University.