🤖 AI Summary
This work addresses the high communication overhead in distributed learning by proposing an event-triggered decentralized gossip framework that enables nodes to adaptively determine communication times based on local model deviation, without requiring central coordination. By integrating event-triggered control with gossip protocols, the method ensures ergodic convergence under non-convex objectives while substantially reducing communication frequency. Experimental results demonstrate a 71.61% reduction in cumulative peer-to-peer transmission volume compared to a full-communication baseline, with only a negligible degradation in model performance.
📝 Abstract
While distributed learning offers a new learning paradigm for distributed networks without central coordination, it is constrained by the communication bottleneck between nodes.
We develop a new event-triggered gossip framework for distributed learning to reduce inter-node communication overhead. The framework introduces an adaptive communication control mechanism that enables each node to autonomously decide, in a fully decentralized fashion, when to exchange model information with its neighbors based on local model deviations. We analyze the ergodic convergence of the proposed framework under non-convex objectives and interpret the convergence guarantees under different triggering conditions. Simulation results show that the proposed framework achieves substantially lower communication overhead than state-of-the-art distributed learning methods, reducing cumulative point-to-point transmissions by **71.61%** with only a marginal performance loss compared with the conventional full-communication baseline.
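To make the triggering idea concrete, the following is a minimal sketch of an event-triggered gossip loop on scalar states: each node rebroadcasts to its neighbors only when its local value has drifted from its last broadcast by more than a threshold, and otherwise its neighbors keep averaging against the stale broadcast. The function name, the threshold rule, and the averaging step are illustrative assumptions, not the paper's exact algorithm or triggering condition.

```python
import numpy as np

def event_triggered_gossip(x0, neighbors, threshold, steps, step_size=0.5):
    """Toy event-triggered gossip averaging on scalar node states.

    x0        : initial value per node
    neighbors : adjacency list (symmetric graph)
    threshold : trigger fires when |x_i - last broadcast of i| > threshold
    Returns final states and the total number of point-to-point transmissions.
    """
    n = len(x0)
    x = np.array(x0, dtype=float)
    last_sent = np.full(n, np.inf)  # force an initial broadcast from every node
    comms = 0
    for _ in range(steps):
        # Event trigger: rebroadcast only on sufficient local deviation.
        for i in range(n):
            if abs(x[i] - last_sent[i]) > threshold:
                last_sent[i] = x[i]
                comms += len(neighbors[i])  # one transmission per neighbor
        # Gossip update using neighbors' most recent (possibly stale) broadcasts.
        x_new = x.copy()
        for i in range(n):
            for j in neighbors[i]:
                x_new[i] += (step_size / len(neighbors[i])) * (last_sent[j] - x[i])
        x = x_new
    return x, comms

if __name__ == "__main__":
    # 4-node ring; consensus target is the average, 1.5.
    x0 = [0.0, 1.0, 2.0, 3.0]
    ring = [[3, 1], [0, 2], [1, 3], [2, 0]]
    x, comms = event_triggered_gossip(x0, ring, threshold=0.05, steps=50)
    full_comms = 50 * sum(len(nb) for nb in ring)  # full-communication baseline
    print(x, comms, full_comms)
```

Because triggers stop firing once local deviations fall below the threshold, the cumulative transmission count stays well below the full-communication baseline, at the cost of a consensus error on the order of the threshold.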