🤖 AI Summary
This study investigates data memorization mechanisms and the associated privacy risks of fine-tuning large language models (LLMs) for medical applications. Focusing on the privacy-sensitive PHEE pharmacovigilance event dataset, we propose an evaluation framework that combines membership inference attacks with prefix-guided generation tasks to systematically analyze the contribution of individual Transformer weight matrices to memorization. We find that the Value and Output projection matrices dominate memorization behavior; lower perplexity on training data correlates strongly with memorization strength; and increasing the LoRA rank increases memorization, with diminishing returns at higher ranks. Crucially, we uncover a nonlinear trade-off between performance gains and privacy leakage during fine-tuning: beyond a critical threshold, substantial accuracy improvements can occur with minimal additional memorization. We further identify key controllable factors that enable low-memorization fine-tuning. These findings provide both an interpretable theoretical foundation and empirically grounded guidance for secure, privacy-aware adaptation of medical LLMs.
📝 Abstract
This study investigates the mechanisms and factors influencing memorization in fine-tuned large language models (LLMs), with a focus on the medical domain due to its privacy-sensitive nature. We examine how different aspects of the fine-tuning process affect a model's propensity to memorize training data, using the PHEE dataset of pharmacovigilance events.
Our research employs two main approaches: a membership inference attack to detect memorized data, and a generation task with prompted prefixes to assess verbatim reproduction. We analyze the impact of adapting different weight matrices in the transformer architecture, the relationship between perplexity and memorization, and the effect of increasing the rank in low-rank adaptation (LoRA) fine-tuning.
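The two evaluation signals described above can be sketched schematically: a loss-based membership inference score (training members tend to receive unusually low perplexity) and a verbatim-reproduction check on the continuation generated from a training prefix. This is a minimal illustration, not the paper's implementation; the threshold value is an arbitrary placeholder for calibration, not a number from the study.

```python
import math

def perplexity(log_probs):
    """Perplexity of a sequence from per-token log-probabilities (natural log)."""
    return math.exp(-sum(log_probs) / len(log_probs))

def membership_score(log_probs, threshold=20.0):
    """Loss-based membership inference: flag a sequence as a likely
    training member if the model assigns it unusually low perplexity.
    `threshold` is an illustrative calibration value, not from the paper."""
    return perplexity(log_probs) < threshold

def verbatim_match(generated, reference_suffix):
    """Prefix-guided generation check: did the model reproduce the
    held-out suffix of a training example verbatim?"""
    return generated.strip() == reference_suffix.strip()

# Illustrative token log-probabilities (hypothetical values):
member_lp = [-0.1] * 10      # high-confidence tokens -> perplexity ~ 1.1
nonmember_lp = [-4.0] * 10   # low-confidence tokens  -> perplexity ~ 54.6

print(membership_score(member_lp))     # -> True
print(membership_score(nonmember_lp))  # -> False
```

In practice both signals would be computed from the fine-tuned model's actual token log-probabilities and sampled continuations, and the membership threshold calibrated against a held-out non-member set.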
Key findings include: (1) Value and Output matrices contribute more significantly to memorization than Query and Key matrices; (2) Lower perplexity in the fine-tuned model correlates with increased memorization; (3) Increasing the LoRA rank increases memorization, with diminishing returns at higher ranks.
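Findings (1) and (3) map directly onto LoRA configuration choices. As a hedged illustration using the Hugging Face PEFT library: restricting adaptation to the Query/Key projections and keeping the rank modest is one way to limit memorization risk. The module names (`q_proj`, `k_proj`, `v_proj`, `o_proj`) follow Llama-style conventions and, like the specific rank value, are assumptions for this sketch rather than settings from the paper.

```python
from peft import LoraConfig

# Illustrative low-memorization configuration (a sketch, not the paper's setup):
# adapt only Query/Key projections and keep the rank modest, since the
# Value/Output matrices and higher ranks were found to drive memorization.
# Module names are Llama-style; adjust them for your base model.
low_memorization_config = LoraConfig(
    r=8,                                  # modest rank: higher ranks memorize more
    lora_alpha=16,
    target_modules=["q_proj", "k_proj"],  # avoid v_proj / o_proj
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Whether this trade is acceptable depends on the task: per finding (1), excluding the Value/Output projections may also cost some downstream accuracy.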
These results provide insights into the trade-offs between model performance and privacy risks in fine-tuned LLMs. Our findings have implications for developing more effective and responsible strategies for adapting large language models while managing data privacy concerns.