Private Federated Learning using Preference-Optimized Synthetic Data

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low utility and challenging privacy–utility trade-off of differentially private (DP) synthetic data in federated learning, this paper proposes a new paradigm for DP synthetic data generation that leverages client preference feedback. It is the first to model private client feedback as preference rankings and to apply preference optimization algorithms, such as Direct Preference Optimization (DPO), to guide large language models in generating high-quality DP text data. The authors also introduce LargeFedBench, the first contamination-free federated text evaluation benchmark. Experiments show that the proposed method substantially improves the utility of DP synthetic data on LargeFedBench, closing the next-token prediction accuracy gap between the fully private and non-private settings by up to 68%, compared with 52% for the best prior synthetic data methods and 10% for state-of-the-art DP federated learning methods. This advance effectively narrows the gap between strong privacy guarantees and model utility in federated learning.
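The core idea summarized above is to turn per-client feedback on candidate synthetic samples into chosen/rejected preference pairs and fine-tune the generator with a DPO-style objective. A minimal sketch of the standard DPO loss for a single preference pair is shown below; the function name, toy log-probability values, and the default `beta` are illustrative assumptions, not details taken from the paper.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) preference pair.

    logp_* are summed log-probabilities of the preferred / dispreferred
    synthetic samples under the policy model being fine-tuned;
    ref_logp_* are the same quantities under the frozen reference model.
    beta scales how strongly the policy may deviate from the reference.
    (Illustrative sketch; POPri's actual training setup is in the paper.)
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# Toy check with made-up log-probabilities: when the policy prefers the
# chosen sample more strongly than the reference does, the margin is
# positive and the loss drops below log(2), the zero-margin value.
loss = dpo_loss(logp_chosen=-10.0, logp_rejected=-12.0,
                ref_logp_chosen=-11.0, ref_logp_rejected=-11.0)
```

Minimizing this loss over many such pairs pushes the generator toward samples that clients ranked higher, which is how preference feedback steers DP synthetic data quality.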

📝 Abstract
In practical settings, differentially private federated learning (DP-FL) is the dominant method for training models from private, on-device client data. Recent work has suggested that DP-FL may be enhanced or outperformed by methods that use DP synthetic data (Wu et al., 2024; Hou et al., 2024). The primary algorithms for generating DP synthetic data for FL applications require careful prompt engineering based on public information and/or iterative private client feedback. Our key insight is that the private client feedback collected by prior DP synthetic data methods (Hou et al., 2024; Xie et al., 2024) can be viewed as a preference ranking. Our algorithm, Preference Optimization for Private Client Data (POPri), harnesses client feedback using preference optimization algorithms such as Direct Preference Optimization (DPO) to fine-tune LLMs to generate high-quality DP synthetic data. To evaluate POPri, we release LargeFedBench, a new federated text benchmark for uncontaminated LLM evaluations on federated client data. POPri substantially improves the utility of DP synthetic data relative to prior work on LargeFedBench datasets and an existing benchmark from Xie et al. (2024). POPri closes the gap between next-token prediction accuracy in the fully-private and non-private settings by up to 68%, compared to 52% for prior synthetic data methods, and 10% for state-of-the-art DP federated learning methods. The code and data are available at https://github.com/meiyuw/POPri.
Problem

Research questions and friction points this paper is trying to address.

Enhancing DP-FL with high-quality synthetic data
Optimizing client feedback via preference ranking
Closing the accuracy gap between private and non-private settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses preference optimization for synthetic data
Fine-tunes LLMs with Direct Preference Optimization
Improves accuracy in private federated learning