Generated Data with Fake Privacy: Hidden Dangers of Fine-tuning Large Language Models on Generated Data

πŸ“… 2024-09-12
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study challenges the common assumption that synthetic data generated by large language models (LLMs) is inherently privacy-safe. It systematically investigates privacy leakage risks when LLM-generated data is used for supervised fine-tuning and self-instruct fine-tuning. Method: Using the Pythia and OPT model families, the authors quantify privacy risks via personally identifiable information (PII) extraction attacks and membership inference attacks (MIAs). Contribution/Results: The work provides the first empirical evidence that LLM-generated data poses privacy risks comparable to real data: rather than serving as a privacy-enhancing technique, it acts as a novel risk vector. Experiments show that fine-tuning Pythia increases PII extraction success rates by over 20%, and self-instruct fine-tuning of Pythia-6.9B raises the MIA ROC-AUC by more than 40%. These findings demonstrate that fine-tuning on synthetic data can significantly amplify privacy leakage, offering critical warnings and establishing foundational benchmarks for the secure use of synthetic data in LLM development.
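To make the first measurement concrete: a PII extraction rate is typically computed by prompting the fine-tuned model with a context and checking whether the true PII string appears in its continuation. The sketch below is illustrative only (it is not the paper's code); `fake_generate` is a hypothetical stand-in for an actual LLM sampling call, and all strings are invented placeholders.

```python
# Illustrative sketch of measuring a PII extraction success rate.
# `generate` is a hypothetical stand-in for an LLM generation call.

def extraction_rate(examples, generate):
    """examples: list of (prompt, true_pii) pairs.
    Returns the fraction of prompts whose continuation leaks the PII."""
    hits = sum(1 for prompt, pii in examples if pii in generate(prompt))
    return hits / len(examples)

# Toy stand-in "model" that leaks exactly one memorized record.
memorized = {"Contact John at": " john.doe@example.com"}
fake_generate = lambda prompt: memorized.get(prompt, " [no leak]")

examples = [
    ("Contact John at", "john.doe@example.com"),  # leaked by the toy model
    ("Reach Alice at", "alice@example.org"),      # not leaked
]
print(extraction_rate(examples, fake_generate))   # 0.5
```

A reported "20% increase" then means this fraction, measured on the same prompt set, rises by that much after fine-tuning on generated data.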

πŸ“ Abstract
Large language models (LLMs) have demonstrated significant success in various domain-specific tasks, with their performance often improving substantially after fine-tuning. However, fine-tuning with real-world data introduces privacy risks. To mitigate these risks, developers increasingly rely on synthetic data generation as an alternative to real data, since data generated by traditional models was believed to differ from real-world data. With the advanced capabilities of LLMs, however, real data and LLM-generated data have become nearly indistinguishable, and this convergence exposes generated data to privacy risks similar to those of real data. Our study investigates whether fine-tuning with LLM-generated data truly enhances privacy or introduces additional privacy risks by examining the structural characteristics of data generated by LLMs, focusing on two primary fine-tuning approaches: supervised fine-tuning (SFT) with unstructured (plain-text) generated data and self-instruct tuning. For SFT, the data is placed into the instruction-tuning format used by previous studies. We measure privacy risks with personally identifiable information (PII) leakage and membership inference attacks (MIAs) on the Pythia Model Suite and Open Pre-trained Transformer (OPT). Notably, after fine-tuning with unstructured generated data, the rate of successful PII extractions for Pythia increased by over 20%, highlighting the potential privacy implications of such approaches. Furthermore, the ROC-AUC score of MIAs for Pythia-6.9B, the second-largest model in the suite, increases by over 40% after self-instruct tuning. Our results indicate the potential privacy risks associated with fine-tuning LLMs on generated data, underscoring the need for careful consideration of privacy safeguards in such approaches.
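The MIA ROC-AUC reported above can be understood with a minimal loss-based attack: score each example by its negative loss under the fine-tuned model (members of the training set tend to have lower loss), then measure how well that score separates members from non-members. The sketch below assumes this simple loss-threshold attack, one of several MIA variants; the loss values are synthetic placeholders, not numbers from the paper.

```python
# Minimal sketch of a loss-based membership inference attack (MIA),
# evaluated with ROC-AUC. All loss values are synthetic placeholders.

def roc_auc(member_scores, nonmember_scores):
    """ROC-AUC = probability that a random member outscores a random
    non-member (ties count as half). O(n*m), fine for an illustration."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Attack score: negative per-example loss. Fine-tuning tends to lower
# loss on training (member) examples, so higher score => "member" guess.
member_losses = [0.8, 1.3, 0.6, 0.9]     # hypothetical training examples
nonmember_losses = [1.5, 1.2, 1.9, 0.7]  # hypothetical held-out examples

auc = roc_auc([-l for l in member_losses], [-l for l in nonmember_losses])
print(f"MIA ROC-AUC: {auc:.2f}")  # 0.5 = chance; closer to 1 = more leakage
```

An ROC-AUC near 0.5 means the attacker cannot distinguish members from non-members; the paper's reported 40%+ increase for Pythia-6.9B after self-instruct tuning means this separability grows substantially.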
Problem

Research questions and friction points this paper is trying to address.

Privacy Leakage
Large Language Models
Synthetic Data Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy Leakage
Large Language Models
Data Synthesis Methods