Guangyu Shen
Purdue University
Google Scholar ID: YiMTVwgAAAAJ
Research interests: Adversarial Machine Learning, AI Security, ML Safety
Links: Homepage, Google Scholar
Citations & Impact (all-time)
Citations: 1,378
H-index: 20
i10-index: 30
Publications: 20
Co-authors: 0
Contact
Email: shen447@purdue.edu
GitHub
Publications (5 listed)
- From Poisoned to Aware: Fostering Backdoor Self-Awareness in LLMs (2025). Cited: 0
- ASTRA: Autonomous Spatial-Temporal Red-teaming for AI Software Assistants (2025). Cited: 0
- MGC: A Compiler Framework Exploiting Compositional Blindness in Aligned LLMs for Malware Generation (2025). Cited: 0
- AuthGuard: Generalizable Deepfake Detection via Language Guidance (2025). Cited: 0
- CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling (2025). Cited: 0
Resume (English only)
Academic Achievements
Paper 'BAIT: Large Language Model Backdoor Scanning by Inverting Attack Target' accepted to IEEE S&P 2025
Paper 'ODSCAN: Backdoor Scanning for Object Detection Models' accepted to IEEE S&P 2024
Paper 'UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening' accepted to ECCV 2024
Paper 'Django: Detecting Trojans in Object Detection Models via Gaussian Focus Calibration' accepted to NeurIPS 2023
Paper 'PICCOLO: Exposing Complex Backdoors in NLP Transformer Models' accepted to IEEE S&P 2022
Paper 'Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense' accepted to ICML 2022
Paper 'Complex Backdoor Detection by Symmetric Feature Differencing' accepted to CVPR 2022
Paper 'Backdoor Scanning for Deep Neural Networks through K-Arm Optimization' accepted to ICML 2021
Won 1st place ($250,000 prize) in the Amazon Nova AI Challenge (July 2025) as co-lead of the winning attacker team, PurCL
Recipient of the Bilsland Fellowship