MAR-FL: A Communication Efficient Peer-to-Peer Federated Learning System

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high communication overhead and poor robustness of federated learning (FL) in wireless environments with dynamic topologies and large-scale node populations, this paper proposes a decentralized peer-to-peer (P2P) FL framework. The core method introduces a hierarchical group-based iterative aggregation mechanism: nodes are organized into multi-level groups, enabling local intra-group aggregation followed by progressive inter-group synchronization—reducing communication complexity from conventional O(N²) to O(N log N). Additionally, the framework incorporates a lightweight communication protocol, fault-tolerant model update mechanisms, and privacy-preserving computation interfaces. Experimental results demonstrate that the proposed framework significantly reduces communication load in highly dynamic networks, improves training efficiency and system scalability, and maintains model convergence and accuracy stability even under frequent node join/leave events.
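The multi-level aggregation described above can be sketched as follows. This is a minimal illustration of iterative group-based averaging, assuming fixed-size groups and plain element-wise averaging; the paper's actual grouping policy, weighting, and synchronization protocol may differ:

```python
import random

def average(models):
    """Element-wise average of a list of model parameter vectors."""
    n = len(models)
    return [sum(vals) / n for vals in zip(*models)]

def hierarchical_aggregate(models, group_size=4):
    """Iteratively average models within groups of `group_size`,
    then aggregate the group-level results, until one model remains.
    Each level shrinks the population by a factor of `group_size`,
    so there are O(log N) aggregation levels."""
    level = models
    while len(level) > 1:
        groups = [level[i:i + group_size]
                  for i in range(0, len(level), group_size)]
        level = [average(g) for g in groups]
    return level[0]

# Example: 16 peers, each holding a 3-parameter local model.
peers = [[random.random() for _ in range(3)] for _ in range(16)]
global_model = hierarchical_aggregate(peers)
```

With equal-sized groups, this recursive averaging yields the same result as a single flat average over all peers, so convergence behavior matches centralized FedAvg-style aggregation while each peer only ever talks to its own group.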

📝 Abstract
The convergence of next-generation wireless systems and distributed Machine Learning (ML) demands Federated Learning (FL) methods that remain efficient and robust with wirelessly connected peers and under network churn. Peer-to-peer (P2P) FL removes the bottleneck of a central coordinator, but existing approaches suffer from excessive communication complexity, limiting their scalability in practice. We introduce MAR-FL, a novel P2P FL system that leverages iterative group-based aggregation to substantially reduce communication overhead while retaining resilience to churn. MAR-FL achieves communication costs that scale as O(N log N), in contrast to the O(N^2) complexity of existing baselines, and thereby remains effective as the number of peers in an aggregation round grows. The system is robust to unreliable FL clients and can integrate privacy-preserving computation.
Problem

Research questions and friction points this paper is trying to address.

Reduces communication overhead in peer-to-peer federated learning
Enhances scalability by lowering complexity from O(N^2) to O(N log N)
Maintains resilience to network churn and unreliable clients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Peer-to-peer federated learning system
Iterative group-based aggregation reduces overhead
Communication scales as O(N log N) rather than O(N^2)
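The scaling claim above can be made concrete with a toy message-count model: full-mesh gossip needs N(N-1) model transfers per round, while iterative group aggregation needs roughly one group exchange per peer per level, over about log N levels. The group size and per-round exchange pattern here are illustrative assumptions, not figures from the paper:

```python
import math

def all_to_all_messages(n):
    """Full-mesh gossip baseline: every peer sends its model
    to every other peer, i.e. N(N-1) messages per round."""
    return n * (n - 1)

def group_based_messages(n, group_size=4):
    """Toy model of iterative group aggregation: in each of
    ~log_g(N) levels, every peer exchanges with its group
    (group_size - 1 messages each), giving O(N log N) total."""
    rounds, level = 0, n
    while level > 1:
        level = math.ceil(level / group_size)
        rounds += 1
    return n * (group_size - 1) * rounds

# 64 peers: 4032 messages all-to-all vs. 576 with groups of 4.
print(all_to_all_messages(64), group_based_messages(64))
```

Even at modest peer counts the gap is large, and it widens quickly: at N = 1024 the baseline needs over a million messages per round while the grouped scheme stays in the tens of thousands.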