- EMNLP 2025: Dynamic Evaluation for Oversensitivity in LLMs
- COLM 2025: THOUGHT TERMINATOR: Benchmarking, Calibrating, and Mitigating Overthinking in Reasoning Models
- NAACL 2025 (Oral): B^4: A Black-Box Scrubbing Attack on LLM Watermarks
Other academic activities include an internship at AWS in Santa Clara.
Research Experience
Currently a second-year CS PhD student at the University of California, Santa Barbara, advised by Prof. William Wang, working on trustworthy AI, efficiency, and related areas.
Education
Received a Bachelor’s degree from Peking University, advised by Prof. Xiaojun Wan. Also worked with Prof. Tianxing He at Tsinghua University and Prof. Yulia Tsvetkov at the University of Washington.
Background
Research interests lie broadly in Language and Vision, particularly in: Trustworthy AI (detecting machine-generated text, attacking LLM watermarks, evaluating oversensitivity in LLMs), Efficiency (prompt compression, overthinking in reasoning models), and other topics (extrinsic evaluation of text summaries).
Miscellany
Actively looking for motivated undergraduate and master’s students to collaborate on exciting topics such as multimodal evaluation, reasoning, and more.