🤖 AI Summary
This work investigates the privacy risks of LoRA-finetuned language models under membership inference attacks (MIAs), revealing that pre-trained models can amplify information leakage, a factor overlooked by existing MIA methods. To address this gap, we propose LoRA-Leak, the first systematic evaluation framework tailored to the LoRA fine-tuning paradigm. LoRA-Leak is the first to empirically uncover and quantify the amplification effect of pre-trained models on MIAs, and introduces five improved attack variants that explicitly leverage the pre-trained model as a reference. We conduct comprehensive evaluations across three mainstream LLMs and three popular NLP tasks, combining ten existing and five improved attacks and reporting AUC as the primary metric. We also assess four defense strategies. Results show that LoRA models remain vulnerable to MIAs (up to 0.775 AUC under conservative fine-tuning settings); among the defenses, only dropout and excluding specific layers from fine-tuning achieve a favorable trade-off between privacy protection and task utility.
📄 Abstract
Language Models (LMs) typically adhere to a "pre-training and fine-tuning" paradigm, where a universal pre-trained model can be fine-tuned to cater to various specialized domains. Low-Rank Adaptation (LoRA) has gained the most widespread use in LM fine-tuning due to its lightweight computational cost and remarkable performance. Because the proportion of parameters tuned by LoRA is relatively small, there might be a misleading impression that the LoRA fine-tuning data is invulnerable to Membership Inference Attacks (MIAs). However, we identify that utilizing the pre-trained model can induce more information leakage, which is neglected by existing MIAs. Therefore, we introduce LoRA-Leak, a holistic evaluation framework for MIAs against the fine-tuning datasets of LMs. LoRA-Leak incorporates fifteen membership inference attacks, including ten existing MIAs and five improved MIAs that leverage the pre-trained model as a reference. In experiments, we apply LoRA-Leak to three advanced LMs across three popular natural language processing tasks, demonstrating that LoRA-based fine-tuned LMs are still vulnerable to MIAs (e.g., 0.775 AUC under conservative fine-tuning settings). We also apply LoRA-Leak to different fine-tuning settings to understand the resulting privacy risks. We further explore four defenses and find that only dropout and excluding specific LM layers during fine-tuning effectively mitigate MIA risks while maintaining utility. We highlight that under the "pre-training and fine-tuning" paradigm, the existence of the pre-trained model makes MIA a more severe risk for LoRA-based LMs. We hope that our findings can provide guidance on data privacy protection for specialized LM providers.
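The abstract's core observation, that the pre-trained model can serve as a reference to amplify leakage, can be illustrated with a minimal sketch of a reference-calibrated loss attack. The specific five improved attacks are not detailed here, so the scoring rule below (pre-trained loss minus fine-tuned loss) and the toy per-sample losses are illustrative assumptions, not the paper's exact method: members of the fine-tuning set tend to show a larger loss drop after fine-tuning, and calibrating with the pre-trained model isolates that drop.

```python
# Sketch of a reference-calibrated membership score (hypothetical data).
# Calibrate the fine-tuned model's loss on each candidate sample with the
# pre-trained model's loss on the same sample; a large drop suggests the
# sample was seen during fine-tuning.

def membership_scores(loss_finetuned, loss_pretrained):
    """Higher score => more likely a member of the fine-tuning set."""
    return [lp - lf for lf, lp in zip(loss_finetuned, loss_pretrained)]

def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy per-sample losses (hypothetical): members see a bigger loss drop.
loss_ft  = [0.9, 1.1, 2.4, 2.6]   # fine-tuned model losses
loss_pre = [2.5, 2.8, 2.5, 2.7]   # pre-trained (reference) model losses
labels   = [1, 1, 0, 0]           # 1 = member of the fine-tuning set

scores = membership_scores(loss_ft, loss_pre)
print(auc(scores, labels))
```

An uncalibrated attack that thresholds `loss_ft` alone would conflate "easy" samples with members; subtracting the reference loss is what makes the pre-trained model a leakage amplifier in this setting.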