- Premise-Augmented Reasoning Chains Improve Error Identification in Math Reasoning with LLMs
- Unsupervised Human Preference Learning
- Democratizing LLMs: An Exploration of Cost-Performance Trade-offs in Self-Refined Open-Source Models
- Scaling Laws For Natural Language Planning Models
- Token Efficient Deep Conversational Reasoning With ConvoDAGs
Research Experience
- Sandia National Laboratories, May 2024 - Present, Research Intern. Focus: Retrieval-Augmented Generation, Large Corpus Analysis
- ConvAI Lab, Aug 2023 - Present, Research Assistant. Focus: Scientific Reasoning in LLMs
- BLENDER Lab, Jun 2023 - Present, Research Assistant. Focus: Domain-Agnostic Self-Refinement in LLMs
- Nference Inc., May 2023 - Aug 2023, Machine Learning Intern. Focus: Tumor Detection using Deep Learning
- Jane Street, May 2023, SEE Trading Fellow. Focus: Trading Algorithms
Education
- Stanford University, Sep 2025 - Present, Master’s in Computer Science
- University of Illinois at Urbana-Champaign, Aug 2021 - May 2025, B.S. in Computer Science, James Scholar, GPA: 3.95/4.00
Background
Research interests: preference learning and reasoning in large language models. Advisors: Prof. Dilek Hakkani-Tür and Prof. Heng Ji.
Miscellany
A full CV is available for download on the personal website.