- “AD-AGENT, a multi-agent LLM framework for anomaly detection” accepted to IJCNLP-AACL 2025 Findings
- New paper on LLM routing
- Two new papers accepted to EMNLP Findings 2025: one on causal methods for hallucination mitigation (Treble Counterfactual VLMs) and another introducing a benchmark for NLP anomaly detection (NLP-ADBench)
- Published a study on benchmarking personalized conversational reasoning for LLMs (PersonaConvBench)
- “AD-LLM: Benchmarking Large Language Models for Anomaly Detection” accepted to ACL 2025 Findings
Research Experience
Conducting research at FORTIS Lab.
Education
- PhD student in Computer Science at the University of Southern California (USC), advised by Prof. Yue Zhao
- Master's degree in Machine Learning and Data Science, USC
- Bachelor's degree in Software Engineering, Nankai University, China
Background
Research focuses on anomaly detection, diversity in generation, synthetic data generation, and trust and safety in large language models and agents.