Published several papers, including 'AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models' and 'To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models'. Additionally, the open-source project 'Loong: Synthesize Long Chain-of-Thoughts at Scale through Verifiers' has been accepted to a NeurIPS 2025 workshop.
Research Experience
Involved in multiple research projects on AI security, including robust safety alignment of large reasoning models and attacks and defenses in adversarial machine learning.
Education
Currently a Ph.D. student in Data Science at The Chinese University of Hong Kong, Shenzhen, supervised by Prof. Baoyuan Wu; received a Master's degree from the Institute of Information Engineering, University of Chinese Academy of Sciences, in 2021.
Background
Research interests include the safety of large language models, data safety in AI systems, and safety in embodied AI.
Miscellany
On the job market and seeking full-time opportunities in academia or industry.