Secure Cooperative Gradient Coding: Optimality, Reliability, and Global Privacy

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two critical challenges in privacy-sensitive federated learning over unreliable communication channels: (i) failure of secure aggregation due to link failures, and (ii) convergence bias induced by stragglers. The authors propose SecCoGC, a framework that natively supports link-failure-resilient secure aggregation over the real field, integrating cooperative gradient coding with cryptographic aggregation to achieve arbitrarily strong global differential privacy guarantees. To ensure equitable privacy protection across clients, they further introduce Fair-SecCoGC, which enforces local mutual information constraints and employs adaptive noise allocation to achieve privacy fairness. Theoretical analysis establishes robust convergence under communication heterogeneity and adversarial dropouts. Extensive experiments show that the approach improves model accuracy by up to 20%–70% over state-of-the-art baselines across diverse packet-loss and latency scenarios, while consistently converging to a neighborhood of the optimal solution.

📝 Abstract
This paper studies privacy-sensitive federated learning (FL) with unreliable communication, focusing on secure aggregation and straggler mitigation. While secure aggregation cryptographically reconstructs the global model without exposing individual client updates, random link failures disrupt its key coordination, degrading model accuracy. Moreover, unreliable communication can lead to objective inconsistency, causing the global model to converge to arbitrary, sub-optimal points far from the intended optimum. This paper proposes Secure Cooperative Gradient Coding (SecCoGC), a practical solution that achieves secure aggregation with arbitrarily strong privacy guarantees and robust straggler mitigation under unreliable communication. SecCoGC operates natively in the real field, making it directly applicable to practical deployments. To ensure equitable privacy protection across clients, we further introduce Fair-SecCoGC, an extension that enforces fairness in the level of privacy offered to all users. In addition, this paper formally formulates the problem of secure aggregation in the real field and presents both general and computationally efficient key construction methods. Moreover, it provides a comprehensive privacy analysis under Local Mutual Information Privacy (LMIP) and Local Differential Privacy (LDP) across all protocol layers. Robustness and convergence properties are also rigorously analyzed. Finally, extensive simulations are performed across diverse network conditions and benchmark datasets to validate the effectiveness of the proposed methods. The results show that SecCoGC achieves strong robustness to unreliable communication under arbitrarily strong privacy guarantees, outperforming existing privacy-preserving methods with performance gains of up to 20%–70%.
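The abstract's core idea of secure aggregation over the real field can be illustrated with a minimal sketch: clients exchange antisymmetric pairwise keys (k_ij = -k_ji), each client masks its gradient with the sum of its keys, and the masks cancel exactly when the server sums all updates, so the server learns the aggregate without seeing any individual gradient. This is a generic pairwise-masking toy example under assumed names (it is not the paper's SecCoGC construction, which additionally handles link failures and stragglers):

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 5

# Each client's local gradient (stand-in values for illustration).
gradients = [rng.normal(size=dim) for _ in range(n_clients)]

# Antisymmetric pairwise keys over the reals: k_ij = -k_ji,
# shared secretly between clients i and j.
keys = {}
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        k = rng.normal(size=dim)
        keys[(i, j)] = k
        keys[(j, i)] = -k

# Each client uploads its gradient plus the sum of its pairwise keys;
# individually, each masked update looks like noise to the server.
masked = [
    gradients[i] + sum(keys[(i, j)] for j in range(n_clients) if j != i)
    for i in range(n_clients)
]

# Summing all masked updates cancels every key pair, recovering the
# true aggregate exactly.
aggregate = sum(masked)
print(np.allclose(aggregate, sum(gradients)))  # → True
```

Note that if any client drops out (a random link failure), its keys no longer cancel and the aggregate is corrupted; designing key constructions that remain recoverable under such failures is precisely the problem the paper tackles.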
Problem

Research questions and friction points this paper is trying to address.

Secure aggregation in federated learning with unreliable communication
Mitigating stragglers while ensuring global privacy guarantees
Achieving objective consistency under random link failures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Secure Cooperative Gradient Coding for privacy
Fair-SecCoGC ensures equitable privacy protection
Real-field operation for practical deployment