Publications
Mixture-of-Agents Enhances Large Language Model Capabilities (arXiv, 2024)
Reasoning in Token Economies: Budget-Aware Evaluation of LLM Reasoning Strategies (EMNLP, 2024)
ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods (EMNLP, 2024)
Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications (ACL Findings, 2024)
LLM-Resistant Math Word Problem Generation via Adversarial Attacks (EMNLP Findings, 2024)
NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge (NAACL Findings, 2024)
Maestro: A Gamified Platform for Teaching AI Robustness (EAAI, 2023)
Research Experience
Before joining Duke, worked closely with Sameer Singh on machine learning interpretability and natural language processing projects. Also held research internships at several companies.
Education
PhD in Computer Science at Duke University, advised by Bhuwan Dhingra; also advised by Sam Wiseman from 2022 to 2023.
Background
Research Interests: LLM reasoning, agents, and alignment. Internships include Together AI, AWS, Intel, Tencent, and the Applied AI lab at Comcast.