Publications
Published multiple papers, including 'OpaqueToolsBench: Learning Nuances of Tool Behavior Through Interaction' (submitted to ICLR 2026), 'The Surprising Effectiveness of Membership Inference with Simple N-Gram Metrics' (CoLM 2025), 'Prismatic Synthesis: Gradient-based Data Diversification Boosts Generalization in LLM Reasoning' (NeurIPS 2025), and others.
Research Experience
Completed research internships at Samaya AI, Apple, and Amazon AWS. Research focuses on data-driven and algorithmic approaches to building language models with stronger, more reliable reasoning abilities.
Education
Completed a B.S./M.S. in Computer Science at the University of Washington, along with a B.S. in Bioengineering. Currently pursuing a Ph.D. in Computer Science at the University of Southern California, advised by Sai Praneeth Karimireddy and Xiang Ren.
Background
Ph.D. student in Computer Science, focusing on data-driven and algorithmic approaches to building language models with stronger, more reliable reasoning abilities. Research areas include understanding how data shapes model behavior, lightweight yet effective methods to advance reasoning, and enabling autonomous reasoning.