Publications
'Improving Large Language Models Function Calling and Interpretability via Guided-Structured Templates' (EMNLP 2025); 'Optimizing Decomposition for Optimal Claim Verification' (ACL 2025); 'Embedding Mental Health Discourse for Community Recommendation' (CODI-ACL 2023); Preprint: 'A Quantitative Review on Language Model Efficiency Research'.
Research Experience
Member of the DM2 Lab directed by Prof. Meng Jiang (August 2022-present); Applied Scientist Intern at Amazon (starting September 2024).
Education
Ph.D. Student in Computer Science & Engineering at the University of Notre Dame, supervised by Prof. Meng Jiang (2022-present); B.S. in Computer Science and B.S. in Mathematics from Texas Christian University, GPA 4.0, Summa Cum Laude (December 2021).
Background
Research Interests: Natural Language Processing and Data Mining. Field of Study: Computer Science & Engineering.
Miscellany
Quote: I have not failed. I’ve just found 10,000 ways that won’t work. — Thomas A. Edison.