🤖 AI Summary
This work tackles policy suboptimality and policy pollution in federated offline reinforcement learning, problems that arise from low-quality and heterogeneous client data. The authors propose the FORLER framework, which employs server-side Q-function ensembling to curb policy pollution and device-side actor rectification: a zeroth-order search for high-Q actions paired with a tailored regularization term that steers the policy toward them. A δ-periodic update mechanism further reduces communication and computational overhead. Theoretical analysis provides safe policy improvement guarantees, and extensive experiments show that FORLER consistently outperforms strong baselines across diverse data-quality and heterogeneity settings while remaining robust, efficient, and privacy-preserving.
📝 Abstract
In Internet-of-Things systems, federated learning has advanced online reinforcement learning (RL) by enabling parallel policy training without sharing raw data. However, interacting with real environments online can be risky and costly, motivating offline federated RL (FRL), where local devices learn from fixed datasets. Despite its promise, offline FRL may break down under low-quality, heterogeneous data. Offline RL tends to get stuck in local optima, and in FRL, one device's suboptimal policy can degrade the aggregated model, i.e., policy pollution. We present FORLER, combining Q-ensemble aggregation on the server with actor rectification on devices. The server robustly merges device Q-functions to curb policy pollution and shift heavy computation off resource-constrained hardware without compromising privacy. Locally, actor rectification enriches policy gradients via a zeroth-order search for high-Q actions plus a bespoke regularizer that nudges the policy toward them. A $\delta$-periodic strategy further reduces local computation. We theoretically provide safe policy improvement performance guarantees. Extensive experiments show FORLER consistently outperforms strong baselines under varying data quality and heterogeneity.
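The actor-rectification idea from the abstract — a zeroth-order search for high-Q actions plus a regularizer that nudges the policy toward them — can be sketched as below. This is a minimal illustration, not the paper's implementation: the linear "Q-ensemble", the Gaussian perturbation scheme, and all shapes and hyperparameters (`n_samples`, `sigma`) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_ensemble(state, action, weights):
    # Toy stand-in for an ensemble of Q-functions: each member is a
    # random linear critic; the aggregate score is their mean.
    return np.mean([w @ np.concatenate([state, action]) for w in weights])

def zeroth_order_rectify(state, policy_action, weights,
                         n_samples=32, sigma=0.1):
    """Zeroth-order search: perturb the policy's action with Gaussian
    noise and keep the candidate with the highest ensemble Q-value."""
    noise = sigma * rng.standard_normal((n_samples, policy_action.shape[0]))
    candidates = np.vstack([policy_action, policy_action + noise])
    scores = np.array([q_ensemble(state, a, weights) for a in candidates])
    return candidates[np.argmax(scores)]

# Hypothetical dimensions: 4-dim state, 2-dim action, 5 ensemble members.
state = rng.standard_normal(4)
policy_action = rng.standard_normal(2)
weights = [rng.standard_normal(6) for _ in range(5)]

best_action = zeroth_order_rectify(state, policy_action, weights)
# Regularization term that would be added to the actor loss, pulling
# the policy's output toward the rectified (higher-Q) action:
reg_loss = np.sum((policy_action - best_action) ** 2)
```

Because the original policy action is included among the candidates, the rectified action's ensemble Q-value can never be worse than the policy's own; the squared-error regularizer then supplies gradient signal even when the critic landscape around the policy's action is flat.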