Federated Learning-Distillation Alternation for Resource-Constrained IoT

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of energy constraints in energy-harvesting Internet-of-Things (EH-IoT) devices, severe wireless channel interference, and the high communication overhead and slow convergence of large-model federated learning (FL), this paper proposes the FL-distillation alternation (FLDA) mechanism. FLDA introduces a framework that adaptively alternates between FL and knowledge distillation (KD) phases across training rounds, integrating energy-harvesting modeling, multichannel slotted-ALOHA access analysis, and interference-robust training strategies to jointly optimize accuracy and efficiency under dynamic resource conditions. Experimental results demonstrate that, compared to standard FL, FLDA reaches target accuracies while reducing energy consumption by up to 98% and converging significantly faster. Moreover, FLDA exhibits superior robustness against channel fading and background interference.
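The alternation idea summarized above can be sketched in a few lines. The sketch below is illustrative only: the payload sizes, schedule, and names (`round_payload`, `schedule`, `fl_every`) are assumptions for intuition, not the paper's exact model. It shows why interleaving FD rounds (which transmit only small model outputs) with occasional FL rounds (which transmit the full model) cuts per-round communication.

```python
# Hypothetical sketch of FL/FD alternation; sizes below are assumed, not from the paper.
MODEL_PARAMS = 1_000_000      # values sent in one full-model FL round (assumed)
NUM_CLASSES = 10              # logits per distillation sample (assumed)
DISTILL_SAMPLES = 100         # public samples whose soft labels are shared in FD (assumed)

def round_payload(phase: str) -> int:
    """Values transmitted per device in one round, by phase."""
    if phase == "FL":
        return MODEL_PARAMS                 # full model update
    return DISTILL_SAMPLES * NUM_CLASSES    # FD: soft labels only

def schedule(num_rounds: int, fl_every: int) -> list[str]:
    """Simple fixed alternation: one FL round every `fl_every` rounds, FD otherwise.
    (The paper's mechanism adapts this dynamically; a fixed period is the simplest case.)"""
    return ["FL" if r % fl_every == 0 else "FD" for r in range(num_rounds)]

phases = schedule(num_rounds=20, fl_every=5)
alternating = sum(round_payload(p) for p in phases)
fl_only = 20 * MODEL_PARAMS
print(f"payload with alternation: {alternating:,} vs FL-only: {fl_only:,}")
```

Even this crude fixed schedule transmits a small fraction of the FL-only payload, which is the lever FLDA uses to trade communication/energy cost against the richer model information carried by full FL rounds.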

📝 Abstract
Federated learning (FL) faces significant challenges in Internet of Things (IoT) networks due to device limitations in energy and communication resources, especially when considering the large size of FL models. From an energy perspective, the challenge is aggravated if devices rely on energy harvesting (EH), as energy availability can vary significantly over time, influencing the average number of participating users in each iteration. Additionally, the transmission of large model updates is more susceptible to interference from uncorrelated background traffic in shared wireless environments. As an alternative, federated distillation (FD) reduces communication overhead and energy consumption by transmitting local model outputs, which are typically much smaller than the entire model used in FL. However, this comes at the cost of reduced model accuracy. Therefore, in this paper, we propose FL-distillation alternation (FLDA). In FLDA, devices alternate between FD and FL phases, balancing model information with lower communication overhead and energy consumption per iteration. We consider a multichannel slotted-ALOHA EH-IoT network subject to background traffic/interference. In such a scenario, FLDA demonstrates higher model accuracy than both FL and FD, and achieves faster convergence than FL. Moreover, FLDA achieves target accuracies saving up to 98% in energy consumption, while also being less sensitive to interference, both relative to FL.
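The abstract's multichannel slotted-ALOHA setting with background interference can be illustrated with a textbook-style success-probability approximation. This is a hedged sketch, not the paper's derivation: it assumes each active device picks one of `num_channels` uniformly at random and background traffic arrives per channel/slot as a Poisson process.

```python
import math

def aloha_success_prob(num_devices: int, tx_prob: float,
                       num_channels: int, bg_rate: float) -> float:
    """Textbook-style approximation (assumed model, not the paper's exact analysis):
    a packet succeeds if no other device transmits on the same channel in the
    same slot AND no background packet lands on that channel."""
    # Probability that each of the other devices avoids this channel this slot.
    p_no_peer = (1.0 - tx_prob / num_channels) ** (num_devices - 1)
    # Probability of zero Poisson background arrivals on this channel/slot.
    p_no_bg = math.exp(-bg_rate)
    return p_no_peer * p_no_bg

# Heavier background traffic lowers the success probability, so large FL model
# updates (many packets per round) suffer more than small FD payloads.
clean = aloha_success_prob(num_devices=50, tx_prob=0.2, num_channels=10, bg_rate=0.0)
noisy = aloha_success_prob(num_devices=50, tx_prob=0.2, num_channels=10, bg_rate=0.5)
print(f"no background: {clean:.3f}, with background: {noisy:.3f}")
```

Under this toy model, an update requiring many successful packet transmissions (an entire model) is far more exposed to collisions and interference than one requiring a few (soft labels), which is the intuition behind FLDA's reduced sensitivity to interference.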
Problem

Research questions and friction points this paper is trying to address.

Balancing model accuracy and energy consumption in IoT networks
Reducing communication overhead in federated learning for resource-constrained devices
Mitigating interference impact on model updates in shared wireless environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alternates between federated learning and distillation
Reduces communication overhead and energy consumption
Improves model accuracy and convergence speed
Rafael Valente da Silva
Department of Electrical and Electronics Engineering of the Federal University of Santa Catarina, Florianópolis, Brazil
Onel L. Alcaraz López
Centre for Wireless Communications (CWC), University of Oulu, 90570 Oulu, Finland
Richard Demo Souza
Department of Electrical and Electronics Engineering - Federal University of Santa Catarina (UFSC)
Wireless Communications · Signal Processing