🤖 AI Summary
This work investigates whether noiseless random projections, such as those used in Low-Rank Adaptation (LoRA), can provide differential privacy (DP) guarantees, with a focus on the distinction between vector- and matrix-valued queries. Through theoretical analysis of the Wishart projection mechanism, we establish for the first time that it inherently satisfies DP for vector queries, yet offers no meaningful privacy for matrix queries such as LoRA updates, leaving them vulnerable to high-accuracy membership inference attacks (AUC > 0.99). To address this, we propose a noisy variant of low-rank fine-tuning and show that the low-rank structure enables privacy amplification: at the same noise level, it achieves stronger DP guarantees than full-parameter fine-tuning, and the resulting tighter privacy accounting permits reduced noise injection, thereby improving utility.
📝 Abstract
We introduce the (Wishart) projection mechanism, a randomized map of the form $S \mapsto M f(S)$ with $M \sim W_d(\tfrac{1}{r} I_d, r)$, and study its differential privacy properties. For vector-valued queries $f$, we prove non-asymptotic DP guarantees without any additive noise, showing that Wishart randomness alone can suffice. For matrix-valued queries, however, we establish a sharp negative result: in the noise-free setting, the mechanism is not DP, and we demonstrate its vulnerability by implementing a near-perfect membership inference attack (AUC $>0.99$). We then analyze a noisy variant and prove privacy amplification due to the randomness and the low-rank projection, in both the large- and small-rank regimes, yielding stronger privacy guarantees than additive noise alone. Finally, we show that LoRA-style updates are an instance of the matrix-valued mechanism, implying that LoRA is not inherently private despite its built-in randomness, but that low-rank fine-tuning can be more private than full fine-tuning at the same noise level. Preliminary experiments suggest that tighter accounting enables lower noise and improved accuracy in practice.
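The mechanism itself is simple to simulate. Below is a minimal NumPy sketch (the function name and interface are ours, not from the paper): a Wishart matrix $M \sim W_d(\tfrac{1}{r} I_d, r)$ can be sampled as $M = \tfrac{1}{r} G G^\top$ with $G$ a $d \times r$ standard Gaussian matrix, and the mechanism releases $M f(S)$. Since $\mathbb{E}[M] = I_d$, the release is unbiased; this sketch illustrates the mechanism only and says nothing about the vector- vs. matrix-query privacy regimes analyzed in the paper.

```python
import numpy as np

def wishart_projection(f_S, r, rng=None):
    """Release M @ f(S) with M ~ W_d((1/r) I_d, r).

    f_S : the query output f(S), a length-d vector or a (d, k) matrix.
    r   : rank / degrees of freedom of the Wishart sample.

    M is sampled as (1/r) * G @ G.T, where G is a (d, r) matrix of
    i.i.d. standard Gaussians, so E[M] = I_d and the release is
    unbiased. Illustrative sketch only (names are hypothetical).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = f_S.shape[0]
    G = rng.standard_normal((d, r))  # G ~ N(0, 1)^{d x r}
    M = (G @ G.T) / r                # M ~ W_d((1/r) I_d, r)
    return M @ f_S
```

The same call covers both cases in the abstract: a length-$d$ `f_S` is the vector-valued query, while a $(d, k)$ `f_S` (e.g. a LoRA-style weight update) is the matrix-valued one for which the noise-free mechanism fails to be DP.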