Shenyi Zhang

Google Scholar ID: xj4Mxp8AAAAJ
Wuhan University
AI Security · Adversarial Machine Learning · Large Language Models
Citations & Impact
All-time
- Citations: 140
- H-index: 4
- i10-index: 3
- Publications: 8
- Co-authors: 0
Resume (English only)
Academic Achievements
- Publications:
  * IntentBreaker: Intent-Adaptive Jailbreak Attack on Large Language Models (ECML PKDD, 2025)
  * Selective Masking Adversarial Attack on Automatic Speech Recognition Systems (ICME, 2025)
  * JBShield: Defending Large Language Models from Jailbreak Attacks through Activated Concept Analysis and Manipulation (USENIX Security Symposium, 2025)
  * Zero-query Adversarial Attack on Black-box Automatic Speech Recognition Systems (CCS, 2024)
  * Hijacking Attacks against Neural Networks by Analyzing Training Data (USENIX Security Symposium, 2024)
  * Enhancing the Transferability of Adversarial Examples with Noise Injection Augmentation (ICME, 2024)
  * Perception-driven Imperceptible Adversarial Attack against Decision-based Black-box Models (TIFS, 2024)
  * Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information (CCS, 2021)
- Conference Reviewer: ACM MM 2025, ICME 2024/2025, IJCNN 2025, AVSS 2025
- Journal Reviewer: TIFS, TDSC, TETC, TON, TCPS, TOIT, CVIU, KBS, Neurocomputing
- Talks: ACM SIGSAC China Postgraduate Academic Forum on Cyberspace Security, 2025; "JBShield: Defending Large Language Models from Jailbreak Attacks through Activated Concept Analysis and Manipulation," USENIX Security, 2025
Research Experience
- Ph.D. student at the School of Cyber Science and Engineering, Wuhan University, focusing on AI security
Education
- Degrees: B.E. in Communication Engineering, M.S. in Electronic Information
- Schools: Shandong University (B.E.), Wuhan University (M.S. & Ph.D.)
- Advisor: Prof. Qian Wang
- Timeline: Received B.E. in 2019, M.S. in 2022; currently a Ph.D. candidate
Background
- Research Interests: AI security, particularly adversarial robustness, safety alignment, and privacy in large language models
- Professional Field: Cybersecurity
- Brief Introduction: Ph.D. student at the School of Cyber Science and Engineering, Wuhan University, advised by Prof. Qian Wang of the NIS&P Lab
Co-authors
0 total (list not available)