🤖 AI Summary
To address the high per-iteration communication overhead and the collaboration failures caused by intermittent connectivity among edge devices in conventional diffusion learning, this paper proposes a lightweight collaborative diffusion learning framework that integrates multi-step local updates with stochastic partial agent participation. It is the first work to incorporate both local update mechanisms and dynamic agent sampling into diffusion learning, ensuring mean-square stability and convergence under unstable network conditions. The authors derive a tight mean-square-deviation (MSD) upper bound explicitly parameterized by the network topology, the agent sampling rate, and the number of local update steps. Theoretical analysis proves mean-square stability, while empirical evaluation shows that the framework retains over 98% of the original accuracy despite a 50% reduction in communication volume, significantly enhancing communication efficiency and robustness in edge-intelligence scenarios.
📝 Abstract
Diffusion learning is a framework that endows edge devices with advanced intelligence. By processing and analyzing data locally and allowing each agent to communicate only with its immediate neighbors, diffusion effectively protects the privacy of edge devices, enables real-time response, and reduces reliance on central servers. However, traditional diffusion learning requires communication at every iteration, leading to high communication overhead, especially with large learning models. Furthermore, the inherent volatility of edge devices, stemming from power outages or signal loss, poses challenges to reliable communication between neighboring agents. To mitigate these issues, this paper investigates an enhanced diffusion learning approach incorporating local updates and partial agent participation. Local updates curtail the communication frequency, while partial agent participation allows agents to join each round according to their availability. We prove that the resulting algorithm is stable in the mean-square-error sense and provide a tight analysis of its mean-square-deviation (MSD) performance. Various numerical experiments are conducted to illustrate our theoretical findings.
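The mechanism described above can be illustrated with a minimal simulation. The sketch below is not the paper's implementation; it is a hypothetical adapt-then-combine diffusion setup in which agents on a ring topology estimate a common linear model, each active agent runs `tau` local stochastic-gradient steps between communication rounds, and participation in each round is Bernoulli with rate `q`. All problem parameters (`K`, `M`, `tau`, `q`, `mu`, the quadratic costs) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: K agents estimate a common model w_star from noisy
# linear measurements d = u @ w_star + v (quadratic local costs).
K, M = 10, 5                      # number of agents, model dimension
w_star = rng.standard_normal(M)

# Doubly stochastic combination matrix over a ring topology.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 1 / 3
    A[k, (k - 1) % K] = 1 / 3
    A[k, (k + 1) % K] = 1 / 3

def local_grad(w):
    """Stochastic gradient of the quadratic cost (1/2)(d - u @ w)^2."""
    u = rng.standard_normal(M)
    d = u @ w_star + 0.01 * rng.standard_normal()
    return -(d - u @ w) * u

def diffusion_local_updates(T=400, tau=4, q=0.5, mu=0.05):
    """Adapt-then-combine diffusion with tau local update steps and
    Bernoulli(q) partial agent participation per communication round."""
    W = np.zeros((K, M))                   # one iterate per agent (rows)
    for _ in range(T):
        active = rng.random(K) < q         # agents available this round
        # Adapt: tau local SGD steps at each participating agent.
        for k in range(K):
            if active[k]:
                for _ in range(tau):
                    W[k] -= mu * local_grad(W[k])
        # Combine: active agents average with neighbors;
        # inactive agents simply keep their current iterate.
        W_new = W.copy()
        for k in range(K):
            if active[k]:
                W_new[k] = A[k] @ W
        W = W_new
    return W

W = diffusion_local_updates()
# Network mean-square deviation from the true model.
msd = np.mean(np.sum((W - w_star) ** 2, axis=1))
print(f"MSD after training: {msd:.4f}")
```

Relative to standard diffusion, communication here occurs only once per `tau` gradient steps and only among the sampled agents, which is the source of the communication savings the paper quantifies.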