🤖 AI Summary
This work addresses the vulnerability of fine-tuned large language models to training data extraction (TDE) attacks, against which existing defenses either lack formal privacy guarantees or incur significant performance degradation. The authors propose SCP-$\Delta_r$, an algorithm grounded in the observation that effective fine-tuning requires preserving only a small set of critical token-level probability shifts. SCP-$\Delta_r$ integrates relative probability modeling with a base-model-guided token-level smoothing mechanism within the Near Access Freeness (NAF) framework to achieve strong privacy protection. The method improves theoretical privacy bounds by several orders of magnitude over prior NAF approaches, while demonstrating robustness across multiple TDE attack benchmarks and maintaining near-perfect task performance.
📝 Abstract
Fine-tuning large language models (LLMs) on sensitive datasets raises privacy concerns, as training data extraction (TDE) attacks can expose highly confidential information. Existing defenses against such attacks either lack formal privacy guarantees or incur substantial utility degradation. We observe that fine-tuning induces widespread probability shifts, yet preserving only a small subset of influential token-level deviations is sufficient; the remaining shifts can be aggressively smoothed with minimal impact on utility. Motivated by this insight, we propose SCP-$\Delta_r$, a Near Access Freeness (NAF)-based algorithm that operates on relative probabilities and explicitly smooths low-impact tokens using a base model. SCP-$\Delta_r$ achieves orders-of-magnitude better theoretical bounds than existing NAF-based methods and provides strong empirical protection against TDE attacks with minimal performance loss.
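The core intuition, preserving a few influential token-level shifts while smoothing the rest toward the base model, can be illustrated with a minimal sketch. This is not the paper's SCP-$\Delta_r$ algorithm; the function name, the choice of absolute probability shift as the influence score, and the parameter `k` are all illustrative assumptions.

```python
import numpy as np

def smooth_low_impact_tokens(p_ft, p_base, k):
    """Illustrative sketch (hypothetical, not the paper's method):
    keep the k largest token-level probability shifts induced by
    fine-tuning and revert every other token to the base model's
    probability, then renormalize.

    p_ft, p_base: 1-D arrays over the vocabulary, each summing to 1.
    k: number of influential tokens whose fine-tuned shift is kept.
    """
    shift = np.abs(p_ft - p_base)       # influence score per token (assumed)
    keep = np.argsort(shift)[-k:]       # indices of the k largest shifts
    out = p_base.copy()                 # default: smooth toward the base model
    out[keep] = p_ft[keep]              # preserve the influential shifts
    return out / out.sum()              # renormalize to a valid distribution
```

Tokens outside the preserved set carry no fine-tuning-specific signal in the output distribution, which is the sense in which aggressive smoothing can limit what a TDE attack could recover while leaving the task-relevant shifts intact.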