Communication Optimization for Decentralized Learning atop Bandwidth-limited Edge Networks

📅 2025-04-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high communication overhead and slow convergence of decentralized federated learning (DFL) in bandwidth-constrained, multi-hop edge networks, this paper proposes the first joint optimization framework that integrates the design of the overlay communication scheme with the design of the mixing matrix, built on a distributed-learning communication model tailored to multi-hop bandwidth constraints. Leveraging communication topology modeling, convex optimization theory, and DSGD convergence analysis, the framework casts each design problem into a tractable optimization solved by an efficient algorithm with guaranteed performance. Evaluations on realistic edge network topologies and benchmark datasets show that the method reduces total training time by over 80% relative to the baseline without sacrificing model accuracy, while significantly improving computational efficiency over the state of the art.
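For reference, a textbook form of the DSGD update that the mixing matrix $W = [W_{ij}]$ controls is shown below (the paper may analyze a variant; here $\eta$ denotes the learning rate and $g_i^{(t)}$ agent $i$'s local stochastic gradient at iteration $t$):

$$x_i^{(t+1)} = \sum_{j=1}^{n} W_{ij}\, x_j^{(t)} - \eta\, g_i^{(t)}, \qquad i = 1, \dots, n,$$

where $W_{ij} \neq 0$ only if agents $i$ and $j$ exchange parameters in that round. The spectral gap of $W$ governs the convergence rate, which is why the mixing-matrix design is coupled with the overlay communication scheme.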

📝 Abstract
Decentralized federated learning (DFL) is a promising machine learning paradigm for bringing artificial intelligence (AI) capabilities to the network edge. Running DFL on top of edge networks, however, faces severe performance challenges due to the extensive parameter exchanges between agents. Most existing solutions for these challenges were based on simplistic communication models, which cannot capture the case of learning over a multi-hop bandwidth-limited network. In this work, we address this problem by jointly designing the communication scheme for the overlay network formed by the agents and the mixing matrix that controls the communication demands between the agents. By carefully analyzing the properties of our problem, we cast each design problem into a tractable optimization and develop an efficient algorithm with guaranteed performance. Our evaluations based on real topology and data show that the proposed algorithm can reduce the total training time by over $80\%$ compared to the baseline without sacrificing accuracy, while significantly improving the computational efficiency over the state of the art.
Problem

Research questions and friction points this paper is trying to address.

Optimize communication in decentralized federated learning
Address bandwidth limits in multi-hop edge networks
Reduce training time without sacrificing accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint design of communication scheme and mixing matrix (a sketch of the mixing-matrix side follows this list)
Tractable optimization with guaranteed performance
Reduces training time by over 80%
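As a concrete illustration of the mixing-matrix side of this joint design, the sketch below solves the classic fastest-distributed-averaging formulation (in the style of Xiao and Boyd), not the paper's actual algorithm, which additionally accounts for overlay communication costs. The function name `design_mixing_matrix` and the binary `adjacency` input are illustrative; it assumes `cvxpy` and `numpy` are available.

```python
import cvxpy as cp
import numpy as np

def design_mixing_matrix(adjacency: np.ndarray) -> np.ndarray:
    """Choose a symmetric mixing matrix W supported on the given topology
    that minimizes the mixing rate sigma_max(W - (1/n) 11^T); a smaller
    rate means fewer DSGD iterations to reach consensus."""
    n = adjacency.shape[0]
    J = np.ones((n, n)) / n                  # projector onto the consensus direction
    W = cp.Variable((n, n), symmetric=True)  # symmetry + unit row sums => doubly stochastic
    constraints = [cp.sum(W, axis=1) == 1]   # each row sums to one
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j] == 0:
                # agents without a link may not exchange parameters
                constraints.append(W[i, j] == 0)
    # For symmetric W with W1 = 1, sigma_max(W - J) is the second-largest
    # eigenvalue modulus of W. (Entrywise nonnegativity, needed for Markov
    # chains, is not required for DSGD mixing and is omitted here.)
    problem = cp.Problem(cp.Minimize(cp.sigma_max(W - J)), constraints)
    problem.solve()
    return W.value

# Example: a 4-agent line topology 0 -- 1 -- 2 -- 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(np.round(design_mixing_matrix(A), 3))
```

The paper's joint design goes further: it also optimizes the communication scheme that realizes the overlay links over the bandwidth-limited multi-hop underlay, rather than taking the topology as fixed.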
Tingyang Sun
Pennsylvania State University, University Park, PA, USA
Tuan Nguyen
Pennsylvania State University, University Park, PA, USA
Ting He
Computer Science and Engineering, Pennsylvania State University
Communications and Networking · Stochastic Optimization · Distributed Learning · Performance Evaluation