🤖 AI Summary
To address the challenges of energy constraints in energy-harvesting Internet-of-Things (EH-IoT) devices, interference from background traffic in shared wireless channels, and the high communication overhead and slow convergence of large-model federated learning (FL), this paper proposes FL-distillation alternation (FLDA). FLDA adaptively alternates between FL and federated distillation (FD) phases across training rounds, combining energy-harvesting modeling and multichannel slotted-ALOHA access analysis to balance model accuracy against per-iteration communication and energy costs. Experimental results show that FLDA attains higher model accuracy than both FL and FD, converges faster than FL, and reaches target accuracies while saving up to 98% in energy consumption relative to FL. FLDA is also less sensitive to background interference than FL.
📝 Abstract
Federated learning (FL) faces significant challenges in Internet of Things (IoT) networks due to device limitations in energy and communication resources, especially given the large size of FL models. From an energy perspective, the challenge is aggravated if devices rely on energy harvesting (EH), as energy availability can vary significantly over time, influencing the average number of participating users in each iteration. Additionally, the transmission of large model updates is more susceptible to interference from uncorrelated background traffic in shared wireless environments. As an alternative, federated distillation (FD) reduces communication overhead and energy consumption by transmitting local model outputs, which are typically much smaller than the entire model used in FL. However, this comes at the cost of reduced model accuracy. Therefore, in this paper, we propose FL-distillation alternation (FLDA). In FLDA, devices alternate between FD and FL phases, balancing the richness of exchanged model information against the communication overhead and energy consumption per iteration. We consider a multichannel slotted-ALOHA EH-IoT network subject to background traffic/interference. In such a scenario, FLDA achieves higher model accuracy than both FL and FD, and converges faster than FL. Moreover, FLDA reaches target accuracies while saving up to 98% in energy consumption, and is less sensitive to interference, both relative to FL.
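The core idea of alternating between FD rounds (small logit uploads) and FL rounds (full model uploads) can be illustrated with a minimal sketch. The periodic schedule, payload sizes, and function names below are illustrative assumptions, not the paper's actual phase policy or parameters:

```python
# Illustrative sketch of an FLDA-style round schedule (assumption: a simple
# fixed periodic alternation between FD and FL phases; the paper's policy
# may be adaptive).

def flda_schedule(num_rounds, fd_per_cycle, fl_per_cycle):
    """Yield 'FD' or 'FL' for each training round, cycling periodically."""
    cycle = ["FD"] * fd_per_cycle + ["FL"] * fl_per_cycle
    for r in range(num_rounds):
        yield cycle[r % len(cycle)]

def uplink_payload_bits(phase, model_bits, logit_bits):
    # FD rounds transmit local model outputs (logits), which are far
    # smaller than the full model transmitted in FL rounds.
    return logit_bits if phase == "FD" else model_bits

# Hypothetical sizes: an 8 Mbit model vs. 10 kbit of averaged logits.
schedule = list(flda_schedule(num_rounds=10, fd_per_cycle=4, fl_per_cycle=1))
total_bits = sum(
    uplink_payload_bits(p, model_bits=8_000_000, logit_bits=10_000)
    for p in schedule
)
```

With these assumed sizes, interleaving FD rounds cuts the total uplink traffic well below that of FL-only training (which would send the full model every round), illustrating why per-iteration energy consumption drops.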