🤖 AI Summary
To address the limited model utility caused by static privacy budget allocation in federated learning (FL), this paper proposes a time–client dual-dimensional adaptive differential privacy (DP) framework. Our method jointly and non-uniformly schedules the privacy budget across training rounds (temporal dimension) and clients (individual dimension)—a first in DP-FL literature. Specifically, it conserves budget early to ensure convergence and reduces noise magnitude later to enhance model accuracy, thereby breaking the conventional assumption of uniform budget consumption. Theoretical analysis establishes a tighter utility bound compared to classical DP-FL baselines. Empirical evaluation on standard benchmarks demonstrates substantial improvements in the privacy–utility trade-off: under stringent privacy budgets (small ε), classification accuracy increases by up to 3.2%—particularly beneficial for high-privacy-sensitivity applications.
📝 Abstract
Federated learning (FL) with differential privacy (DP) provides a framework for collaborative machine learning, enabling clients to train a shared model while adhering to strict privacy constraints. The framework allows each client to have an individual privacy guarantee, e.g., by adding different amounts of noise to each client's model updates. One underlying assumption is that all clients spend their privacy budgets uniformly over time (learning rounds). However, it has been shown in the literature that learning in early rounds typically focuses on more coarse-grained features that can be learned at lower signal-to-noise ratios while later rounds learn fine-grained features that benefit from higher signal-to-noise ratios. Building on this intuition, we propose a time-adaptive DP-FL framework that expends the privacy budget non-uniformly across both time and clients. Our framework enables each client to save privacy budget in early rounds so as to be able to spend more in later rounds when additional accuracy is beneficial in learning more fine-grained features. We theoretically prove utility improvements in the case that clients with stricter privacy budgets spend budgets unevenly across rounds, compared to clients with more relaxed budgets, who have sufficient budgets to distribute their spend more evenly. Our practical experiments on standard benchmark datasets support our theoretical results and show that, in practice, our algorithms improve the privacy-utility trade-offs compared to baseline schemes.