🤖 AI Summary
In federated learning (FL), differential privacy (DP) provides formal privacy guarantees but suffers from utility loss due to conservative worst-case assumptions; moreover, existing DP mechanisms exhibit variable memory consumption on resource-constrained devices, hindering practical deployment. This paper proposes the first fixed-memory DP mechanism for FL, integrating interactive privacy accounting with a human-in-the-loop adaptive tuning framework to ensure stable, bounded memory usage across all training rounds on edge devices. Our approach incorporates BERT fine-tuning, evaluation on the GLUE benchmark, and heterogeneous data partitioning. At privacy budgets of ε = 10 and ε = 6, accuracy drops by only 1.33% and 1.9%, respectively—substantially outperforming variable-memory baselines. Key contributions include: (1) resolving the unbounded memory overhead of DP in FL; (2) enabling dynamic, adaptive privacy–utility trade-offs; and (3) demonstrating a lightweight, scalable pathway toward production-ready private FL.
📝 Abstract
Federated learning (FL) enhances privacy by keeping user data on local devices. However, emerging attacks have demonstrated that the updates users share during training can reveal significant information about their data, which has greatly hindered the adoption of FL for training robust AI models in sensitive applications. Differential privacy (DP) is considered the gold standard for safeguarding user data, but DP guarantees are highly conservative, providing worst-case privacy guarantees. This can lead to overestimating privacy needs, which may compromise the model's accuracy. Moreover, interpreting these privacy guarantees has proven challenging across different contexts, and factors such as the number of training iterations, the data distribution, and application-specific requirements add further complexity to this problem. In this work, we propose a framework that integrates a human entity as a privacy practitioner to determine an optimal trade-off between the model's privacy and utility. Our framework is the first to address the variable memory requirements of existing DP methods in FL settings where resource-limited devices (e.g., cell phones) can participate. To support such settings, we adopt a recent DP method with fixed memory usage to ensure scalable private FL. We evaluate the proposed framework by fine-tuning a BERT-based model on the GLUE benchmark (a common approach in the literature), leveraging the new accountant and employing diverse data-partitioning strategies to mimic real-world conditions. As a result, we achieve stable memory usage with an average accuracy reduction of only 1.33% for $\epsilon = 10$ and 1.9% for $\epsilon = 6$, compared to the state-of-the-art DP accountant, which does not support fixed memory usage.
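To make the DP step in this setting concrete, the sketch below shows the standard clip-and-noise aggregation (Gaussian mechanism) a server might apply to client updates in private FL. It is a minimal illustration of the generic mechanism only; the paper's fixed-memory accountant and human-in-the-loop tuning are not reproduced here, and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def dp_aggregate(updates, clip_norm, noise_multiplier, seed=None):
    """Clip each client's update to L2 norm <= clip_norm, sum them,
    add Gaussian noise scaled by noise_multiplier * clip_norm, and
    return the noisy average. Illustrative only."""
    rng = np.random.default_rng(seed)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose norm exceeds the clipping bound.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the per-client sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(updates)
```

With `noise_multiplier = 0` the function reduces to plain clipped averaging, which makes the privacy-utility trade-off visible: larger noise multipliers give smaller effective $\epsilon$ at the cost of accuracy.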