Stanley Wu
Scholar

Google Scholar ID: wkis3pgAAAAJ
PhD Student, University of Chicago
Security · Privacy · Machine Learning
Citations & Impact
All-time
Citations: 192
H-index: 6
i10-index: 4
Publications: 11
Co-authors: 0
Resume (English only)
Academic Achievements
  • Papers published:
    - 2025: Paper on diffusion model poisoning via VLM adversarial examples, accepted to CCS '25
    - 2024: Disrupting Style Mimicry Attacks on Video Imagery, preprint
    - 2024: Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models, IEEE Symposium on Security and Privacy (Oakland)
    - 2024: TMI! Finetuned Models Leak Private Information from their Pretraining Data, Privacy Enhancing Technologies Symposium (PETS)
    - 2023: How to Combine Membership-Inference Attacks on Multiple Updated Models, Privacy Enhancing Technologies Symposium (PETS)
Research Experience
  • Before joining the University of Chicago, he spent a year working as a data scientist at Klaviyo.
Education
  • Received a bachelor's degree in computer science from Northeastern University in 2023, where he worked with Alina Oprea and Jonathan Ullman. Currently a 3rd-year Ph.D. student in the SAND Lab at the University of Chicago, co-advised by Ben Zhao and Heather Zheng.
Background
  • His primary academic interest is adversarial machine learning, with a particular focus on security issues in generative AI. Recently, he has been studying the safety limitations of generative models and developing methods to protect human creatives against intrusive training.
Miscellany
  • His personal website includes links to his Google Scholar, LinkedIn, GitHub, Twitter, and Goodreads accounts.