- MELON: Indirect Prompt Injection Defense via Masked Re-execution and Tool Comparison, ICML 2025
- DyVal: Graph-informed Dynamic Evaluation of Large Language Models, ICLR (Spotlight) 2024
- PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts, CCS LAMPS Workshop 2023
- Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning, ICCV 2023
- DyVal 2: Dynamic Evaluation of Large Language Models by Meta Probing Agents, ICML 2024
- Awards and News:
- MELON accepted by ICML 2025
- Hosting the AAAI 2025 Tutorial on Evaluating Large Language Models: Challenges and Methods with Prof. Jindong Wang, Dr. Linyi Yang, Prof. Yue Feng, and Prof. Yue Zhang
- Selected to present a talk at the KAUST Rising Stars in AI Symposium 2025
- PromptRobust accepted by CCS LAMPS Workshop
- DyVal 2 accepted by ICML 2024
Research Experience
- Internship at Microsoft
- Advisors: Prof. Jindong Wang and Prof. Xing Xie
Education
- Degree: Ph.D. (in progress)
- University: University of California, Santa Barbara (UCSB)
- Advisors: Prof. William Wang and Prof. Wenbo Guo
Background
- Research Interests: Development of trustworthy AI systems and evaluation of foundation models
- Advisors: Prof. William Wang and Prof. Wenbo Guo
- Personal Background: First-year Ph.D. student at UCSB; previously interned at Microsoft, where the work was advised by Prof. Jindong Wang and Prof. Xing Xie