Initialisation and Network Effects in Decentralised Federated Learning

📅 2024-03-23
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
In decentralised federated learning, the entanglement between network topology and model initialisation degrades training efficiency. Method: the paper first uncovers the synergistic interaction between these two factors, then proposes a topology-aware initialisation strategy based on graph eigenvector centrality that requires no central coordinator. The method uses the distribution of node centralities to guide neural-network parameter initialisation, aligning training dynamics with network structure without incurring additional communication overhead. Contribution/Results: experiments across diverse real-world and synthetic topologies show that the proposed approach reduces the average number of iterations by 40% compared with standard random initialisation. It significantly improves convergence stability and scalability, enabling efficient collaborative training at scale (up to one thousand nodes) while preserving decentralisation and communication efficiency.
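The summary above can be illustrated with a minimal sketch. The paper does not specify the exact mapping from centrality to parameters, so the helpers below (`eigenvector_centrality`, `centrality_scaled_init`, the `base_std` parameter, and the choice to scale each node's initial weight spread by its normalised centrality) are hypothetical assumptions, not the authors' method:

```python
import numpy as np

def eigenvector_centrality(adj, iters=200, tol=1e-9):
    """Principal eigenvector of the adjacency matrix via power iteration."""
    x = np.ones(adj.shape[0]) / adj.shape[0]
    for _ in range(iters):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)  # renormalise each step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

def centrality_scaled_init(adj, node, shape, base_std=0.05, rng=None):
    """Hypothetical rule: draw a node's initial weights from a Gaussian whose
    spread is proportional to that node's (normalised) eigenvector centrality."""
    rng = rng or np.random.default_rng(0)
    c = eigenvector_centrality(adj)
    scale = base_std * (c[node] / c.max())
    return rng.normal(0.0, scale, size=shape)

# Toy communication network: a 5-node path graph (node 2 is most central).
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

w_central = centrality_scaled_init(adj, node=2, shape=(4, 4))
w_leaf = centrality_scaled_init(adj, node=0, shape=(4, 4))
```

Each node needs only the global centrality distribution, which can be estimated without a central coordinator, so the scheme stays uncoordinated in the sense the abstract describes.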

📝 Abstract
Fully decentralised federated learning enables collaborative training of individual machine learning models on a distributed network of communicating devices while keeping the training data localised on each node. This approach avoids central coordination, enhances data privacy and eliminates the risk of a single point of failure. Our research highlights that the effectiveness of decentralised federated learning is significantly influenced by the network topology of connected devices and the learning models' initial conditions. We propose a strategy for uncoordinated initialisation of the artificial neural networks based on the distribution of eigenvector centralities of the underlying communication network, leading to a radically improved training efficiency. Additionally, our study explores the scaling behaviour and the choice of environmental parameters under our proposed initialisation strategy. This work paves the way for more efficient and scalable artificial neural network training in a distributed and uncoordinated environment, offering a deeper understanding of the intertwining roles of network structure and learning dynamics.
Problem

Research questions and friction points this paper is trying to address.

Investigates the impact of network topology on decentralised federated learning
Proposes eigenvector-centrality-based initialisation for improved training efficiency
Explores scaling behaviour and parameter choices in decentralised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncoordinated initialisation based on eigenvector centralities
Improved training efficiency in decentralised federated learning
Explored scaling behaviour and environmental parameters