Published multiple papers, including 'SafeSearch: Do Not Trade Safety for Utility in LLM Search Agents' and 'Visual Backdoor Attacks on MLLM Embodied Decision Making via Contrastive Trigger Learning'; recognized as a 'Top Reviewer' at NeurIPS 2025; reviewed for ARR, NeurIPS, and ICLR.
Research Experience
Applied Scientist Intern at Amazon, Data Scientist Intern at Microsoft, Research Scientist Intern at JD.COM Silicon Valley Research Center, Applied Scientist Intern at ByteDance.
Education
Ph.D. student at the University of Illinois Urbana-Champaign (UIUC), advised by Prof. Daniel Kang; Master's degree from UIUC, advised by Prof. Heng Ji; Bachelor's degree from Peking University, advised by Prof. Sujian Li.
Background
Research focuses on developing safe (multimodal) Large Language Models (LLMs) and LLM agents for real-world deployment, with an emphasis on identifying and mitigating safety vulnerabilities. Studied a wide range of safety risks in LLMs and LLM agents, including fine-tuning vulnerabilities, indirect prompt injection attacks, multimodal RAG knowledge poisoning, and backdoor attacks. Explored reinforcement learning approaches to enhance the safety of these models and agents without compromising utility.
Miscellany
This website is based on a template created by Jon Barron. Last updated: Oct 30, 2025.