LoRA and Privacy: When Random Projections Help (and When They Don't)

📅 2026-01-29
🤖 AI Summary
This work investigates whether noiseless random projections, such as those used in Low-Rank Adaptation (LoRA), can provide differential privacy (DP) guarantees, with a focus on the distinction between vector-valued and matrix-valued queries. Through theoretical analysis of the Wishart projection mechanism, we establish for the first time that it inherently satisfies DP for vector queries, yet offers no meaningful privacy for matrix queries such as LoRA updates, leaving them vulnerable to high-accuracy membership inference attacks (AUC > 0.99). To address this, we propose a noisy variant of low-rank fine-tuning and demonstrate that the low-rank structure enables privacy amplification: at identical noise levels it achieves stronger DP guarantees than full-parameter fine-tuning, and the tighter privacy accounting permits reduced noise injection, thereby improving utility.
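The matrix-query vulnerability described above can be illustrated with a small simulation. The sketch below is not the paper's attack; it is a hypothetical membership-inference score built on the same observation, namely that a noise-free Wishart projection $Y = M f(S)$ of a matrix query preserves enough of $f(S)$'s structure to separate members from non-members when the dimension is much larger than the dataset. All parameter choices (`d`, `n`, `r`) and the scoring rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 100, 5, 4  # ambient dimension, dataset size, Wishart rank (illustrative)

# Dataset records as columns; a matrix-valued query f(S) = X X^T (sum of outer products)
X = rng.standard_normal((d, n))
F = X @ X.T

# Noise-free Wishart projection of the matrix query: Y = M F,
# with M = (1/r) G G^T distributed as W_d((1/r) I_d, r)
G = rng.standard_normal((d, r))
Y = (G @ G.T / r) @ F

# Hypothetical attacker score ||Y x||: for a member x_i we have
# F x_i ≈ ||x_i||^2 x_i + cross terms, so its score dwarfs that of a
# fresh random vector when d >> n.
def score(x):
    return np.linalg.norm(Y @ x)

member_scores = np.array([score(X[:, i]) for i in range(n)])
outsider_scores = np.array([score(rng.standard_normal(d)) for _ in range(n)])

# Pairwise ranking AUC of the attack (fraction of member/outsider pairs
# where the member scores higher)
auc = np.mean(member_scores[:, None] > outsider_scores[None, :])
```

In this regime the score distributions barely overlap, giving an AUC near 1 and matching the qualitative claim that the noiseless matrix mechanism is not private.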

📝 Abstract
We introduce the (Wishart) projection mechanism, a randomized map of the form $S \mapsto M f(S)$ with $M \sim W_d(\tfrac{1}{r} I_d, r)$, and study its differential privacy properties. For vector-valued queries $f$, we prove non-asymptotic DP guarantees without any additive noise, showing that Wishart randomness alone can suffice. For matrix-valued queries, however, we establish a sharp negative result: in the noise-free setting, the mechanism is not DP, and we demonstrate its vulnerability by implementing a near-perfect membership inference attack (AUC $>0.99$). We then analyze a noisy variant and prove privacy amplification due to the randomness and the low-rank projection, in both large- and small-rank regimes, yielding stronger privacy guarantees than additive noise alone. Finally, we show that LoRA-style updates are an instance of the matrix-valued mechanism, implying that LoRA is not inherently private despite its built-in randomness, but that low-rank fine-tuning can be more private than full fine-tuning at the same noise level. Preliminary experiments suggest that tighter accounting enables lower noise and improved accuracy in practice.
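The mechanism in the abstract is straightforward to sample. The sketch below is an illustrative implementation, not the authors' code: it uses the standard fact that $M = \frac{1}{r} G G^\top$ with $G$ an i.i.d. standard normal $d \times r$ matrix has distribution $W_d(\tfrac{1}{r} I_d, r)$, so that $\mathbb{E}[M] = I_d$. The function name and the toy query are assumptions for demonstration.

```python
import numpy as np

def wishart_projection(f_s, r, rng):
    """Apply the projection mechanism S -> M f(S) with M ~ W_d((1/r) I_d, r).

    M is sampled as (1/r) G G^T, where G is a d x r matrix of i.i.d.
    standard normals; this gives the stated Wishart law and E[M] = I_d.
    """
    d = f_s.shape[0]
    G = rng.standard_normal((d, r))
    return (G @ G.T / r) @ f_s

rng = np.random.default_rng(0)
f_s = np.ones(5)  # an illustrative vector-valued query output
out = wishart_projection(f_s, r=3, rng=rng)
```

Because $\mathbb{E}[M] = I_d$, the released vector is an unbiased randomization of $f(S)$; the paper's point is that for vector queries this multiplicative randomness alone already yields a DP guarantee, with no additive noise.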
Problem

Research questions and friction points this paper is trying to address.

LoRA · differential privacy · random projections · membership inference · privacy vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wishart projection · differential privacy · LoRA · privacy amplification · low-rank fine-tuning
Yaxi Hu
Max Planck Institute for Intelligent Systems, Tübingen, Germany
Johanna Dungler
Department of Computer Science, University of Copenhagen
Bernhard Schölkopf
Max Planck Institute for Intelligent Systems, Tübingen, Germany
Amartya Sanyal
University of Copenhagen
Privacy · Machine Learning · Adversarial Learning · Learning Theory · Robustness