* IntentBreaker: Intent-Adaptive Jailbreak Attack on Large Language Models (ECML PKDD, 2025)
* Selective Masking Adversarial Attack on Automatic Speech Recognition Systems (ICME, 2025)
* JBShield: Defending Large Language Models from Jailbreak Attacks through Activated Concept Analysis and Manipulation (USENIX Security Symposium, 2025)
* Zero-query Adversarial Attack on Black-box Automatic Speech Recognition Systems (CCS, 2024)
* Hijacking Attacks against Neural Networks by Analyzing Training Data (USENIX Security Symposium, 2024)
* Enhancing the Transferability of Adversarial Examples with Noise Injection Augmentation (ICME, 2024)
- Talks: JBShield: Defending Large Language Models from Jailbreak Attacks through Activated Concept Analysis and Manipulation (USENIX Security, 2025); ACM SIGSAC China Postgraduate Academic Forum on Cyberspace Security (2025)
Research Experience
- Ph.D. student at the School of Cyber Science and Engineering, Wuhan University, focusing on AI security
Education
- Degrees: B.E. in Communication Engineering, M.S. in Electronic Information
- Schools: Shandong University (B.E.), Wuhan University (M.S. & Ph.D.)
- Advisor: Prof. Qian Wang
- Timeline: Received B.E. in 2019, M.S. in 2022, currently a Ph.D. candidate
Background
- Research Interests: AI security, particularly adversarial robustness, safety alignment, and privacy in large language models
- Professional Field: Cybersecurity
- Brief Introduction: Ph.D. student at the School of Cyber Science and Engineering, Wuhan University, advised by Prof. Qian Wang at the NIS&P Lab