🤖 AI Summary
This work proposes Mosaic Learning, a framework for decentralized learning that addresses efficiency and performance bottlenecks caused by redundant communication of correlated model parameters. By exploiting intrinsic parameter correlations, the method decomposes the model into fragments that propagate independently across the network, reducing communication redundancy and increasing information diversity while keeping communication overhead constant. Theoretical analysis shows that the framework matches the best-known worst-case convergence rate and improves iterative contraction by reducing the system's largest eigenvalue. Experiments on four learning tasks validate its effectiveness, with node-level test accuracy improvements of up to 12 percentage points over Epidemic Learning, a state-of-the-art baseline.
📝 Abstract
Decentralized learning (DL) enables collaborative machine learning (ML) without a central server, making it suitable for settings where training data cannot be centrally hosted. We introduce Mosaic Learning, a DL framework that decomposes models into fragments and disseminates them independently across the network. Fragmentation reduces redundant communication between correlated parameters and enables more diverse information propagation without increasing communication cost. We theoretically show that Mosaic Learning (i) achieves a state-of-the-art worst-case convergence rate, and (ii) exploits parameter correlations in an ML model, improving contraction by reducing the largest eigenvalue of a simplified system. We empirically evaluate Mosaic Learning on four learning tasks and observe up to 12 percentage points higher node-level test accuracy than epidemic learning (EL), a state-of-the-art baseline. In summary, Mosaic Learning improves DL performance without sacrificing utility or efficiency, positioning it as a new standard for DL.
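To make the fragment-and-disseminate idea concrete, the following is a minimal sketch of one communication round, not the paper's exact protocol: the fragmentation scheme (contiguous splits of a flat parameter vector), the peer-sampling rule (one independent random peer per fragment), and the plain averaging rule are all illustrative assumptions introduced here, and the names `fragment` and `mosaic_round` are hypothetical.

```python
import random

def fragment(model, num_fragments):
    """Split a flat parameter list into contiguous fragments
    (an assumed fragmentation scheme, for illustration only)."""
    size = (len(model) + num_fragments - 1) // num_fragments
    return [model[i:i + size] for i in range(0, len(model), size)]

def mosaic_round(models, num_fragments, rng):
    """One sketched round: each node sends each of its fragments to
    an independently chosen random peer; receivers average each
    incoming fragment with their own copy, element-wise."""
    n_nodes = len(models)
    # inbox[node][frag_idx] collects fragments received this round
    inbox = [[[] for _ in range(num_fragments)] for _ in range(n_nodes)]
    for sender, model in enumerate(models):
        for f_idx, frag in enumerate(fragment(model, num_fragments)):
            peer = rng.choice([p for p in range(n_nodes) if p != sender])
            inbox[peer][f_idx].append(frag)
    new_models = []
    for node, model in enumerate(models):
        merged = []
        for f_idx, own in enumerate(fragment(model, num_fragments)):
            received = inbox[node][f_idx]
            if received:
                # convex combination of own fragment and received copies
                combined = [own] + received
                own = [sum(vals) / len(combined) for vals in zip(*combined)]
            merged.extend(own)
        new_models.append(merged)
    return new_models

# Toy usage: three nodes with 4-parameter models, split into 2 fragments.
models = [[1.0] * 4, [0.0] * 4, [0.5] * 4]
models = mosaic_round(models, num_fragments=2, rng=random.Random(0))
```

Because fragments travel independently, a node can hold a mosaic of fragments from different peers after one round, which is the source of the increased information diversity; total bytes sent per node are unchanged, since fragment sizes sum to the full model size.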