🤖 AI Summary
Traditional federated learning (FL) suffers from a single point of failure, limited personalization, and poor robustness to data heterogeneity and client dropouts due to its centralized star topology. To address these limitations, this paper proposes LIGHTYEAR, a decentralized FL framework. It replaces the central server with a peer-to-peer topology, introduces a semantic consistency scoring mechanism—evaluating the credibility of model updates in function space using local validation sets—and employs a regularization-based aggregation strategy for adaptive, trustworthy selection and fusion of local updates. Experiments across heterogeneous and adversarial settings demonstrate that LIGHTYEAR significantly improves individual client performance over state-of-the-art centralized and decentralized baselines. The framework achieves superior robustness against client failures and data distribution shifts, enables effective personalization without additional fine-tuning, and enhances interpretability through transparent, locally grounded update evaluation.
📝 Abstract
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy by keeping data local. Traditional FL approaches rely on a centralized, star-shaped topology, where a central server aggregates model updates from clients. However, this architecture introduces several limitations, including a single point of failure, limited personalization, poor robustness to distribution shifts, and vulnerability to malfunctioning clients. Moreover, update selection in centralized FL often relies on low-level parameter differences, which can be unreliable when client data is not independent and identically distributed, and offers clients little control. In this work, we propose a decentralized, peer-to-peer (P2P) FL framework. It leverages the flexibility of the P2P topology to enable each client to identify and aggregate a personalized set of trustworthy and beneficial updates. This framework is the Local Inference Guided Aggregation for Heterogeneous Training Environments to Yield Enhancement Through Agreement and Regularization (LIGHTYEAR). Central to our method is an agreement score, computed on a local validation set, which quantifies the semantic alignment of incoming updates in function space with respect to the client's reference model. Each client uses this score to select a tailored subset of updates and performs aggregation with a regularization term that further stabilizes training. Our empirical evaluation across two datasets shows that the proposed approach consistently outperforms both centralized baselines and existing P2P methods in terms of client-level performance, particularly under adversarial and heterogeneous conditions.
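The agreement-and-regularization pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: the agreement score here is approximated as prediction agreement between a peer's model and the client's reference model on a local validation set, and the names (`agreement_score`, `select_and_aggregate`), the selection threshold, and the regularization strength `lam` are all illustrative assumptions.

```python
import numpy as np

def agreement_score(peer_logits, ref_logits):
    """Illustrative function-space agreement: fraction of local
    validation samples on which the peer model's predicted class
    matches the client's reference model (the paper's exact score
    may differ)."""
    return np.mean(np.argmax(peer_logits, axis=1) == np.argmax(ref_logits, axis=1))

def select_and_aggregate(ref_params, peer_params, scores, threshold=0.5, lam=0.1):
    """Keep peer updates whose agreement score meets `threshold`,
    average them weighted by score, then regularize the result
    toward the client's own reference parameters (strength `lam`).
    `threshold` and `lam` are hypothetical hyperparameters."""
    kept = [(p, s) for p, s in zip(peer_params, scores) if s >= threshold]
    if not kept:
        return ref_params  # no trustworthy peers: keep local model
    total = sum(s for _, s in kept)
    weighted_avg = sum(s * p for p, s in kept) / total
    return (1 - lam) * weighted_avg + lam * ref_params

# Toy usage: one client scoring two incoming peer updates.
ref_logits = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
peer_logits = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
score = agreement_score(peer_logits, ref_logits)  # agrees on 2 of 3 samples

ref_params = np.zeros(2)
peers = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
new_params = select_and_aggregate(ref_params, peers, scores=[0.8, 0.2])
```

In this sketch the regularization term simply interpolates toward the reference parameters; it stands in for whatever stabilizing penalty the aggregation objective actually uses.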