Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning

📅 2025-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In differentially private federated learning, balancing privacy protection and model utility remains challenging. This paper proposes an adaptive gradient clipping mechanism based on multi-objective optimization. It jointly models privacy loss (measured by Rényi differential privacy budget consumption) and model utility (quantified by global test accuracy) as an end-to-end differentiable objective, enabling data- and iteration-aware adjustment of clipping norms and overcoming the theoretical and empirical limitations of fixed-threshold clipping. The method integrates differential privacy, federated learning, and multi-objective optimization, and is supported by a rigorous convergence analysis. Experiments on MNIST, Fashion-MNIST, and CIFAR-10 demonstrate that, under identical privacy budgets, the proposed approach significantly outperforms fixed-clipping baselines, achieving up to a 4.2% improvement in test accuracy.

📝 Abstract
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, making it a promising approach for privacy-preserving machine learning. However, ensuring differential privacy (DP) in FL presents challenges due to the trade-off between model utility and privacy protection. Clipping gradients before aggregation is a common strategy to limit privacy loss, but selecting an optimal clipping norm is non-trivial, as excessively high values compromise privacy, while overly restrictive clipping degrades model performance. In this work, we propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework. By integrating privacy and utility considerations into the optimization objective, our approach balances privacy preservation with model accuracy. We theoretically analyze the convergence properties of our method and demonstrate its effectiveness through extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10 datasets. Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints. This work highlights the potential of dynamic clipping strategies to enhance privacy-utility trade-offs in differentially private federated learning.
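The clip-then-aggregate step the abstract describes can be sketched as follows. This is a minimal illustration of standard DP-FedAvg-style aggregation, not the paper's implementation: each client update is rescaled so its L2 norm does not exceed the clipping threshold, and Gaussian noise calibrated to that threshold is added to the average. The function name and parameters (`clip_and_noise`, `noise_multiplier`) are hypothetical.

```python
import numpy as np

def clip_and_noise(client_updates, clip_norm, noise_multiplier, rng=None):
    """Clip each client's update to L2 norm `clip_norm`, average, and add
    Gaussian noise scaled to the clipping norm (Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds the threshold;
        # updates already within the ball are left unchanged.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    agg = np.mean(clipped, axis=0)
    # Noise std scales with clip_norm, so a smaller clipping norm
    # means less noise (but more bias from clipping) -- the trade-off
    # the adaptive mechanism is designed to balance.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```

This makes the abstract's tension concrete: `clip_norm` appears both in the bias introduced by clipping and in the noise scale `sigma`, so neither a very small nor a very large value is optimal.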
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy and utility in federated learning with differential privacy
Optimizing gradient clipping norms to enhance model performance and privacy
Developing adaptive clipping for better privacy-utility trade-offs in FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive clipping mechanism for dynamic norm adjustment
Multi-objective optimization balancing privacy and utility
Theoretical convergence analysis with experimental validation
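To make the "dynamic norm adjustment" idea concrete, here is a minimal sketch of one well-known adaptive-clipping strategy, quantile tracking (in the spirit of Andrew et al.'s adaptive clipping for DP-FedAvg). It is not the paper's multi-objective formulation; the function name and parameters are hypothetical. The threshold is nudged each round toward a target quantile of the observed client update norms, so roughly a fixed fraction of updates get clipped.

```python
import math

def adapt_clip_norm(clip_norm, update_norms, target_quantile=0.5,
                    lr=0.2, min_norm=1e-3):
    """Move the clipping norm toward the `target_quantile` of the
    clients' update norms via a geometric (multiplicative) step."""
    # Fraction of clients whose update norm fell below the threshold.
    frac_below = sum(n <= clip_norm for n in update_norms) / len(update_norms)
    # If too many updates are clipped (frac_below < target), grow the
    # threshold; if too few, shrink it to reduce the noise scale.
    clip_norm *= math.exp(-lr * (frac_below - target_quantile))
    return max(clip_norm, min_norm)
```

A multi-objective variant, as the summary describes, would replace the quantile target with a scalarized objective trading off a utility proxy against privacy budget consumption, but the round-by-round update structure would look similar.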