🤖 AI Summary
Problem: In resource-constrained real-time IoT sensing scenarios, Federated Continual Learning (FCL) suffers from poor generalization and severe overfitting due to non-IID device data, along with high computational and communication overhead, excessive energy consumption, and unacceptable response latency.
Method: This work introduces, for the first time, client-side data sampling as a learnable control variable, proposing a novel sampling-driven federated learning paradigm. We design SCFL, an online reinforcement learning algorithm based on Soft Actor-Critic (SAC), to jointly optimize sampling policies and model training. A lightweight distributed training architecture is further developed to enable millisecond-level adaptation to changing environments.
Results: Experiments demonstrate that the global model achieves a 12.7% improvement in generalization accuracy across heterogeneous devices, reduces average communication and computation energy consumption by 34.5%, and significantly mitigates overfitting, while simultaneously satisfying stringent requirements on real-time responsiveness, prediction accuracy, and energy efficiency.
📝 Abstract
In the domain of Federated Learning (FL) systems, recent cutting-edge methods rely heavily on convergence analyses under idealized conditions. Specifically, these approaches assume that the training datasets on IoT devices share similar attributes with the global data distribution. However, this assumption fails to capture the full spectrum of data characteristics in real-time sensing FL systems. To overcome this limitation, we propose a new system specifically designed for IoT networks with real-time sensing capabilities. Our approach accounts for the generalization gap induced by each user's data sampling process. By effectively controlling this sampling process, we can mitigate overfitting and improve overall accuracy. In particular, we first formulate an optimization problem that harnesses the sampling process to reduce overfitting while maximizing accuracy. In pursuit of this objective, we derive a surrogate optimization problem that accounts for energy efficiency while optimizing accuracy with strong generalization. To solve this high-complexity optimization problem, we introduce an online reinforcement learning algorithm, named Sample-driven Control for Federated Learning (SCFL), built on the Soft Actor-Critic (SAC) framework. This enables the agent to dynamically adapt and approach the global optimum even in changing environments. By leveraging the capabilities of SCFL, our system offers a promising solution for resource allocation in FL systems with real-time sensing capabilities.
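To make the control loop concrete, here is a minimal, self-contained sketch of the idea the abstract describes: an RL agent that picks a per-round client sampling rate and is rewarded for accuracy gains net of energy cost. All names (`SamplingAgent`, `simulate_round`, the reward weights, and the toy environment) are illustrative assumptions, and a simple softmax policy-gradient learner stands in for the full Soft Actor-Critic machinery of SCFL.

```python
import math
import random

random.seed(0)

def reward(acc_gain, energy, energy_weight=0.5):
    """Reward trading off generalization gain against energy spent (hypothetical weighting)."""
    return acc_gain - energy_weight * energy

class SamplingAgent:
    """Toy softmax policy over discrete sampling rates (a stand-in for SAC)."""
    def __init__(self, rates=(0.25, 0.5, 1.0), lr=0.1):
        self.rates = rates
        self.prefs = [0.0] * len(rates)  # action preferences
        self.lr = lr

    def probs(self):
        m = max(self.prefs)
        exps = [math.exp(p - m) for p in self.prefs]  # stable softmax
        z = sum(exps)
        return [e / z for e in exps]

    def act(self):
        r, cum = random.random(), 0.0
        for i, p in enumerate(self.probs()):
            cum += p
            if r <= cum:
                return i
        return len(self.rates) - 1

    def update(self, action, rew):
        # REINFORCE-style update: shift preference mass toward higher-reward actions
        probs = self.probs()
        for i in range(len(self.prefs)):
            grad = (1.0 if i == action else 0.0) - probs[i]
            self.prefs[i] += self.lr * rew * grad

def simulate_round(rate):
    # toy environment: moderate sampling generalizes best; energy grows with the rate
    acc_gain = 1.0 - (rate - 0.5) ** 2
    energy = rate
    return reward(acc_gain, energy)

agent = SamplingAgent()
for _ in range(2000):
    a = agent.act()
    agent.update(a, simulate_round(agent.rates[a]))

# the learned policy concentrates on the rate with the best accuracy/energy trade-off
best_rate = agent.rates[max(range(len(agent.rates)), key=lambda i: agent.probs()[i])]
```

In this toy environment the lowest rate (0.25) wins because its small accuracy loss is outweighed by its energy savings; in SCFL the analogous trade-off is learned online from the real FL system's feedback rather than a fixed formula.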