Matthew Jagielski
Google Scholar ID: _8rw_GMAAAAJ
Anthropic
adversarial machine learning
differential privacy
security
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 13,200
H-index: 33
i10-index: 41
Publications: 20
Co-authors: 17
Contact
CV
GitHub
Publications
16 items (8 shown)
Curation Leaks: Membership Inference Attacks against Data Curation for Machine Learning (2026). Cited: 0
Thought-Transfer: Indirect Targeted Poisoning Attacks on Chain-of-Thought Reasoning Models (2026). Cited: 0
Extracting alignment data in open models (2025). Cited: 0
SoK: Data Minimization in Machine Learning (2025). Cited: 0
Black-Box Privacy Attacks on Shared Representations in Multitask Learning (2025). Cited: 0
Cascading Adversarial Bias from Injection to Distillation in Language Models (2025). Cited: 0
Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models (2025). Cited: 0
Covert Attacks on Machine Learning Training in Passively Secure MPC (2025). Cited: 0
Co-authors
17 total (8 shown)
Nicholas Carlini (Anthropic)
Florian Tramèr (Assistant Professor of Computer Science, ETH Zurich)
Alina Oprea (Northeastern University)
Katherine Lee (Researcher, OpenAI)
Eric Wallace (UC Berkeley)
Cristina Nita-Rotaru (Professor, Khoury College of Computer Science, Northeastern University)
Nicolas Papernot (University of Toronto and Vector Institute)
Jonathan Ullman (Associate Professor of Computer Science, Northeastern University)