Zhen Xiang
University of Georgia
Google Scholar ID: BhM3udQAAAAJ
Research area: machine learning
Citations & Impact (all-time)
  • Citations: 1,805
  • H-index: 21
  • i10-index: 29
  • Publications: 20
  • Co-authors: 0
Academic Achievements
  • Jan 2025: One paper accepted by ICLR 2025
  • Oct 2024: One paper accepted by Neurocomputing
  • Sep 2024: Two papers accepted by NeurIPS 2024
  • Jun 2024: One paper accepted by IROS 2024 (oral)
  • May 2024: Proposal 'The LLM and Agent Safety Competition 2024' accepted to NeurIPS 2024 Competition Track
  • May 2024: Paper 'ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs' accepted by ACL 2024
  • Jan 2024: Paper 'BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers: A Comprehensive Study' accepted by TKDE
  • Jan 2024: Paper 'BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models' accepted by ICLR 2024
  • Serving as Associate Editor for IEEE TCSVT from Jan 2024 to Dec 2025
  • Sep 2023: Paper 'CBD: A Certified Backdoor Detector Based on Local Dominant Probability' accepted by NeurIPS 2023
  • Jul 2023: Paper 'MMBD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic' accepted by IEEE S&P 2024
  • Jul 2023: Organizing 'The Trojan Detection Challenge 2023 (LLM Edition)'
  • May 2023: Paper 'UMD: Unsupervised Model Detection for X2X Backdoor Attacks' accepted by ICML 2023
  • Apr 2023: Co-authored book 'Adversarial Learning and Secure AI' accepted by Cambridge University Press (published Dec 2023)
  • Dec 2022: Organized the first IEEE Trojan Removal Competition
Background
  • Assistant Professor at the School of Computing, University of Georgia
  • Research interests include trustworthy machine learning, large foundation models, and AI agents
  • Recent research focuses on AI agents powered by large foundation models, covering:
    - Deployment of AI agents in healthcare, autonomy, education, and science
    - Safety and security of AI agents in high-stakes applications
    - Development of guardrail agents addressing safety, privacy, and fairness issues in AI applications