Resume (English only)
Academic Achievements
Interleaved Reasoning for Large Language Models via Reinforcement Learning Preprint, 2025
Language Models (Mostly) Know When to Stop Reading NeurIPS, 2025
Improving Model Alignment Through Collective Intelligence of Open-Source Models ICML, 2025
ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods EMNLP, 2024
Adversarial Math Word Problem Generation EMNLP Findings, 2024
Extracting Lexical Features from Dialects via Interpretable Dialect Classifiers NAACL, 2024
Research Experience
Before joining Duke, I worked with Antonios Anastasopoulos on low-resource NLP.
Education
Second-year Ph.D. student in Computer Science at Duke University, advised by Bhuwan Dhingra.
Background
Research interests: Post-training methods for LLMs, with a focus on efficient algorithms for improving reasoning, especially on complex multi-step agentic tasks.
Miscellany
Supported by the 2025 Apple Scholars in AI/ML Ph.D. Fellowship and the 2024 National Science Foundation Graduate Research Fellowship.