Zachary Coalson
Google Scholar ID: AFAZZgkAAAAJ
PhD Student in Computer Science, Oregon State University
machine learning, security, privacy
Citations & Impact (all-time)
  • Citations: 12
  • H-index: 2
  • i10-index: 0
  • Publications: 7
  • Co-authors: 1
Academic Achievements
  • Paper 'Using influence functions to reduce language model toxicity' accepted to NeurIPS 2025
  • Paper 'Characterizing the resilience of LLM inference to random bit-wise faults' accepted to SC 2025
  • Paper 'Improving the efficiency of VLN' accepted to ICCV 2025
  • Paper 'Jailbreaking large language models with adversarial bit-flips' posted on arXiv
  • Paper 'Poisoning attacks against neural architecture search' posted on arXiv
  • First publication 'Auditing multi-exit language models against adversarial slowdown' accepted to NeurIPS 2023
  • Awarded 2025 GEM Fellowship
  • Recipient of 2025 NSF GRFP
  • Received ARCS Foundation Oregon Scholar Award (2023)
Background
  • First-year PhD student in Computer Science at Oregon State University
  • Researching trustworthy and socially responsible machine learning under the supervision of Professor Sanghyun Hong
  • Member of the Trustworthy and Responsible AI Lab (TRUE)
  • Aims to audit and improve the robustness of ML systems against adversarial threats, misuse, and undesirable behaviors
  • Currently focused on improving trustworthiness of large language models, e.g., reducing toxicity and studying new jailbreaking techniques