🤖 AI Summary
This paper addresses collaborative training of probabilistic generative classifiers in decentralized federated learning, where nodes learn a globally consistent generative classifier without sharing raw data, exchanging only local sufficient statistics rather than model parameters. The authors propose a serverless distributed message-passing framework featuring a sufficient-statistics-based aggregation mechanism and generative-model-specific parameter update rules, enabling convergence under arbitrary network topologies and non-IID data distributions. The paper provides theoretical guarantees of global convergence for the proposed algorithm. Empirical evaluations across diverse network structures, data heterogeneity levels, and system scales demonstrate that the method achieves performance on par with centralized training and significantly outperforms existing decentralized federated learning approaches.
📝 Abstract
Federated learning is a paradigm of increasing relevance in real-world applications, aimed at building a global model across a network of heterogeneous users without requiring the sharing of private data. We focus on model learning over decentralized architectures, where users collaborate directly to update the global model without relying on a central server. In this context, the current paper proposes a novel approach to collaboratively learning probabilistic generative classifiers with a parametric form. The framework consists of a communication network over a set of local nodes, each of which holds its own local data, together with a local updating rule. The proposal involves sharing local statistics with neighboring nodes: each node aggregates its neighbors' information and iteratively learns its own local classifier, which progressively converges to a global model. Extensive experiments demonstrate that the algorithm consistently converges to a globally competitive model across a wide range of network topologies, network sizes, local dataset sizes, and extreme non-i.i.d. data distributions.
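To make the mechanism concrete, here is a minimal sketch of the idea as we understand it from the abstract: each node summarizes its local data into per-class sufficient statistics (counts, feature sums, sums of squares, as used by a Gaussian naive-Bayes-style generative classifier), and nodes repeatedly average these statistics with their graph neighbors. The `Node` and `gossip_round` names, the synchronous equal-weight averaging scheme, and the Gaussian model family are all illustrative assumptions, not the paper's exact update rules.

```python
# Illustrative sketch (not the paper's exact algorithm): nodes gossip
# per-class sufficient statistics of a Gaussian generative classifier
# instead of raw data or model parameters.
import numpy as np

class Node:
    def __init__(self, X, y, n_classes):
        d = X.shape[1]
        # Local sufficient statistics per class: count, sum, sum of squares.
        self.count = np.zeros(n_classes)
        self.s1 = np.zeros((n_classes, d))
        self.s2 = np.zeros((n_classes, d))
        for c in range(n_classes):
            Xc = X[y == c]
            self.count[c] = len(Xc)
            self.s1[c] = Xc.sum(axis=0)
            self.s2[c] = (Xc ** 2).sum(axis=0)

    def stats(self):
        return self.count.copy(), self.s1.copy(), self.s2.copy()

    def params(self):
        # Class means/variances derived from the (possibly averaged)
        # statistics; the ratios are invariant to the common rescaling
        # that gossip averaging introduces.
        mu = self.s1 / self.count[:, None]
        var = self.s2 / self.count[:, None] - mu ** 2
        return mu, var

def gossip_round(nodes, neighbors):
    # Synchronous gossip on a fixed undirected graph: each node replaces
    # its statistics by the average over itself and its neighbors.
    snapshots = [n.stats() for n in nodes]
    for i, node in enumerate(nodes):
        group = [i] + neighbors[i]
        node.count = np.mean([snapshots[j][0] for j in group], axis=0)
        node.s1 = np.mean([snapshots[j][1] for j in group], axis=0)
        node.s2 = np.mean([snapshots[j][2] for j in group], axis=0)
```

On a connected regular topology (e.g. a ring), this averaging is doubly stochastic, so every node's statistics converge to the network-wide average; since `params()` takes ratios of statistics, each node's local classifier then coincides with the one centralized training would fit on the pooled data.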