🤖 AI Summary
To address the high communication overhead and poor robustness of federated learning (FL) in wireless environments with dynamic topologies and large numbers of nodes, this paper proposes a decentralized peer-to-peer (P2P) FL framework. The core method is a hierarchical group-based iterative aggregation mechanism: nodes are organized into multi-level groups, enabling local intra-group aggregation followed by progressive inter-group synchronization, which reduces communication complexity from the conventional O(N²) to O(N log N). The framework additionally incorporates a lightweight communication protocol, fault-tolerant model update mechanisms, and privacy-preserving computation interfaces. Experimental results demonstrate that the proposed framework significantly reduces communication load in highly dynamic networks, improves training efficiency and system scalability, and maintains model convergence and accuracy even under frequent node join/leave events.
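The group-based iterative aggregation described above can be illustrated with a minimal, centrally simulated sketch. All names here (`weighted_average`, `hierarchical_aggregate`, the group size of 4) are illustrative assumptions, not the paper's actual implementation: peers are partitioned into fixed-size groups, each group averages its members' models, and the group representatives are aggregated again at the next level until one global model remains.

```python
def weighted_average(pairs):
    """Weighted elementwise average of (model_vector, weight) pairs,
    so groups of unequal size still contribute proportionally."""
    total = sum(w for _, w in pairs)
    dim = len(pairs[0][0])
    merged = [sum(m[i] * w for m, w in pairs) / total for i in range(dim)]
    return merged, total

def hierarchical_aggregate(models, group_size=4):
    """Aggregate peer models level by level in fixed-size groups.

    At each level, the current peers are partitioned into groups of at
    most `group_size`; every group averages its members locally and
    forwards a single representative model (with its accumulated
    weight) to the next level. In the decentralized setting each group
    performs an intra-group exchange, so with O(log N) levels the
    total message count grows as O(N log N) rather than the O(N^2)
    of flat all-to-all gossip. This function simulates that process
    centrally for clarity.
    """
    level = [(m, 1) for m in models]  # each peer starts with weight 1
    while len(level) > 1:
        level = [weighted_average(level[i:i + group_size])
                 for i in range(0, len(level), group_size)]
    model, _ = level[0]
    return model
```

For example, aggregating nine one-dimensional models `[0.0] ... [8.0]` with groups of 4 produces two full groups and one leftover peer, and the weighted merge still recovers the exact global mean of 4.0.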
📝 Abstract
The convergence of next-generation wireless systems and distributed Machine Learning (ML) demands Federated Learning (FL) methods that remain efficient and robust with wirelessly connected peers and under network churn. Peer-to-peer (P2P) FL removes the bottleneck of a central coordinator, but existing approaches suffer from excessive communication complexity, limiting their scalability in practice. We introduce MAR-FL, a novel P2P FL system that leverages iterative group-based aggregation to substantially reduce communication overhead while retaining resilience to churn. MAR-FL achieves communication costs that scale as O(N log N), in contrast to the O(N²) complexity of existing baselines, and thereby remains effective as the number of peers in an aggregation round grows. The system is robust to unreliable FL clients and can integrate privacy-preserving computation.