🤖 AI Summary
This work addresses the challenges posed by unreliable communication and differential privacy in hierarchical federated learning, where partial client participation and injected privacy noise degrade model accuracy and robustness. To this end, we propose H-SecCoGC, a novel framework that, for the first time, integrates error-correcting codes into hierarchical secure aggregation to enable structured gradient aggregation under differential privacy guarantees of arbitrary strength. By mitigating the information loss caused by communication failures, H-SecCoGC significantly enhances model convergence efficiency and robustness. Theoretical analysis and extensive experiments demonstrate that our approach simultaneously achieves strong privacy preservation, high accuracy, and stable aggregation performance in unreliable communication environments, outperforming existing methods by a clear margin.
📝 Abstract
Hierarchical federated learning (HFL) has emerged as an effective paradigm for improving link quality between clients and the server. However, ensuring model accuracy while preserving privacy under unreliable communication remains a key challenge in HFL, as the coordination of privacy noise across clients can be randomly disrupted. To address this limitation, we propose a robust hierarchical secure aggregation scheme, termed H-SecCoGC, which integrates coding strategies to enforce structured aggregation. The proposed scheme not only ensures accurate global model construction under varying levels of privacy, but also avoids the partial participation issue, thereby significantly improving robustness, privacy preservation, and learning efficiency. Both theoretical analyses and experimental results demonstrate the superiority of our scheme under unreliable communication across arbitrarily strong privacy guarantees.
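The partial participation issue the abstract refers to can be made concrete with a minimal sketch. The code below is not the H-SecCoGC construction itself; it only illustrates the standard pairwise-mask secure aggregation baseline (all variable names are illustrative), showing that the masks cancel exactly when every client's update arrives, while a single dropped update over an unreliable link leaves uncancelled noise in the aggregate:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
n_clients = 3

# Each client holds a local gradient (toy values for illustration).
gradients = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise random masks shared between client pairs (i, j), i < j.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    """Client i adds +mask for partners j > i and -mask for j < i,
    so all masks cancel when every client's update is summed."""
    u = gradients[i].copy()
    for j in range(n_clients):
        if j < i:
            u -= masks[(j, i)]
        elif j > i:
            u += masks[(i, j)]
    return u

# Full participation: masks cancel, server recovers the true gradient sum.
full_sum = sum(masked_update(i) for i in range(n_clients))
print(np.allclose(full_sum, sum(gradients)))  # True

# Unreliable link: client 2's update is lost; residual masks corrupt the sum.
partial_sum = sum(masked_update(i) for i in range(n_clients - 1))
print(np.allclose(partial_sum, sum(gradients[:-1])))  # False
```

In this baseline, the server would have to re-collect masks from dropouts or discard the round; schemes like the one proposed here instead use coding to keep the aggregate recoverable despite such failures.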