Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) faces an inherent trade-off between privacy preservation (PP) and model fairness, particularly under heterogeneous (non-IID) and homogeneous (IID) data distributions. This work systematically evaluates three representative FL algorithms—q-FedAvg, q-MAML, and Ditto—under three privacy-enhancing mechanisms: differential privacy (DP), homomorphic encryption (HE), and secure multi-party computation (SMC). We empirically assess their joint impact on accuracy, fairness (measured via group performance disparity), and privacy guarantees. Our key findings reveal that DP significantly exacerbates inter-group performance gaps; HE and SMC mitigate fairness degradation but incur substantial computational overhead; and the nature of the privacy–fairness–utility trade-off is highly sensitive to both data distribution and privacy-mechanism pairing. Based on these insights, we propose a design framework for responsible AI in FL, offering principled guidelines and practical pathways toward robust, privacy-preserving, and fair federated learning.
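The two measurement ingredients described above — group performance disparity and DP noise injection — can be sketched as follows. This is an illustrative sketch, not the paper's code; the function names (`group_disparity`, `dp_perturb`) and the choice of the Gaussian mechanism with L2 clipping are assumptions for clarity.

```python
import numpy as np

def group_disparity(group_accuracies):
    """Group performance disparity: gap between the best- and
    worst-served group's accuracy (one common fairness measure)."""
    return max(group_accuracies) - min(group_accuracies)

def dp_perturb(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Gaussian-mechanism client-level DP (a sketch): clip the update's
    L2 norm, then add calibrated Gaussian noise before it leaves the
    client. Larger noise_multiplier = stronger privacy, noisier update."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)  # enforce bounded sensitivity
    return update + rng.normal(0.0, noise_multiplier * clip_norm,
                               size=update.shape)
```

Because the noise scale is uniform while underrepresented groups contribute fewer samples, their signal-to-noise ratio degrades fastest — the mechanism by which DP can widen the disparity the summary reports.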

📝 Abstract
Federated Learning (FL) enables collaborative machine learning while preserving data privacy but struggles to balance privacy preservation (PP) and fairness. Techniques like Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMC) protect sensitive data but introduce trade-offs. DP enhances privacy but can disproportionately impact underrepresented groups, while HE and SMC mitigate fairness concerns at the cost of computational overhead. This work explores the privacy-fairness trade-offs in FL under IID (Independent and Identically Distributed) and non-IID data distributions, benchmarking q-FedAvg, q-MAML, and Ditto on diverse datasets. Our findings highlight context-dependent trade-offs and offer guidelines for designing FL systems that uphold responsible AI principles, ensuring fairness, privacy, and equitable real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy and fairness in Federated Learning systems.
Exploring trade-offs between privacy techniques and computational efficiency.
Evaluating fairness impacts of privacy methods on underrepresented groups.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint empirical evaluation of privacy, fairness, and accuracy trade-offs in Federated Learning.
Compares three privacy-enhancing mechanisms: Differential Privacy, Homomorphic Encryption, and Secure Multi-Party Computation.
Benchmarks q-FedAvg, q-MAML, and Ditto on diverse datasets under both IID and non-IID distributions.
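Of the benchmarked methods, q-FedAvg makes the fairness lever most explicit: clients with higher local loss are up-weighted during aggregation. A simplified sketch of that aggregation rule follows (based on the published q-FFL/q-FedAvg update of Li et al.; the exact implementation used in this paper's benchmarks may differ, and `lr` here is a hypothetical local step-size parameter).

```python
import numpy as np

def qfedavg_aggregate(global_w, client_ws, client_losses, q=1.0, lr=0.1):
    """Simplified q-FedAvg-style aggregation: clients with larger local
    loss F_k are up-weighted by F_k**q, pushing the global model toward
    poorly-served clients. q=0 recovers plain FedAvg averaging."""
    # pseudo-gradients reconstructed from each client's local update
    deltas = [(global_w - w) / lr for w in client_ws]
    num = sum(F**q * d for F, d in zip(client_losses, deltas))
    den = sum(q * F**(q - 1) * np.dot(d, d) + F**q / lr
              for F, d in zip(client_losses, deltas))
    return global_w - num / den
```

With `q=0` the rule reduces to the uniform average of client models; raising `q` trades a little average accuracy for a smaller gap between well-served and poorly-served clients, which is exactly the fairness knob the paper's benchmarks exercise.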