FOVA: Offline Federated Reinforcement Learning with Mixed-Quality Data

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In offline federated reinforcement learning, heterogeneous behavioral policies of varying quality degrade global policy performance. To address this, we propose FOVA—a novel framework that (i) employs a client-local voting mechanism to identify high-return actions, thereby mitigating interference from low-quality behavioral data; (ii) integrates advantage-weighted regression (AWR) to align local and global optimization objectives; and (iii) provides the first theoretical guarantee that the learned policy strictly dominates *any* behavioral policy under mild assumptions. FOVA combines a federated architecture, distributed policy aggregation, and action-level quality filtering. Extensive experiments on multiple benchmarks demonstrate that FOVA significantly outperforms state-of-the-art methods in both policy performance and sample efficiency, while exhibiting strong robustness to behavioral policy heterogeneity and stable convergence across diverse non-IID settings.
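The summary names advantage-weighted regression as the local update rule but gives no formulas. The standard AWR recipe fits the policy by weighted regression on logged actions, with weights `exp(advantage / beta)`; the sketch below is a minimal illustration of that weighting, where `beta` (temperature) and `w_max` (weight clipping) are assumed hyperparameters, not values from the paper:

```python
import numpy as np

def awr_weights(returns, values, beta=1.0, w_max=20.0):
    """AWR-style sample weights: exp(A / beta), clipped for stability.

    returns : estimated return of each logged (state, action) pair
    values  : baseline value estimate V(s) for each state
    The policy is then fit by maximizing sum_i w_i * log pi(a_i | s_i),
    i.e. weighted behavior cloning that favors high-advantage actions.
    """
    adv = np.asarray(returns) - np.asarray(values)
    return np.minimum(np.exp(adv / beta), w_max)

# Example: actions with positive advantage get weight > 1,
# negative advantage gets weight < 1, so the regression is
# biased toward better-than-average logged behavior.
w = awr_weights(returns=[1.0, 2.0, 3.0], values=[2.0, 2.0, 2.0])
```

Because the update stays a supervised regression on in-dataset actions, it avoids querying out-of-distribution actions, which is why AWR-style objectives are popular in offline RL.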

📝 Abstract
Offline Federated Reinforcement Learning (FRL), a marriage of federated learning and offline reinforcement learning, has attracted increasing interest recently. Despite some advances, we find that the performance of most existing offline FRL methods drops dramatically when provided with mixed-quality data, that is, when the logged behaviors (offline data) are collected by policies of varying quality across clients. To overcome this limitation, this paper introduces a new vote-based offline FRL framework, named FOVA. It exploits a *vote mechanism* to identify high-return actions during local policy evaluation, alleviating the negative effect of low-quality behaviors from diverse local learning policies. In addition, building on advantage-weighted regression (AWR), we construct consistent local and global training objectives, significantly enhancing the efficiency and stability of FOVA. Further, we conduct an extensive theoretical analysis and rigorously show that the policy learned by FOVA enjoys strict policy improvement over the behavioral policy. Extensive experiments corroborate the significant performance gains of our proposed algorithm over existing baselines on widely used benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Addresses performance drop in offline FRL with mixed-quality data
Introduces a vote mechanism to identify high-return actions locally
Ensures policy improvement over behavioral policies via consistent objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vote mechanism identifies high-return actions locally
Consistent local-global objectives via advantage-weighted regression
Theoretical guarantee of strict policy improvement
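The page does not specify how the local vote is tallied, so the sketch below is a hypothetical illustration of action-level quality filtering in the same spirit: an ensemble of local critics each "votes" for transitions whose estimated value exceeds that critic's own batch median, and only majority-approved transitions would enter the weighted regression. The critic ensemble, median rule, and `threshold` are all assumptions, not the paper's mechanism:

```python
import numpy as np

def vote_filter(q_ensemble, threshold=0.5):
    """Hypothetical majority-vote filter over logged transitions.

    q_ensemble : array-like, shape (n_critics, n_transitions) of
                 per-critic value estimates for each logged action.
    Each critic votes for transitions it scores above its own median;
    transitions approved by at least `threshold` of critics are kept.
    Returns a boolean mask over transitions.
    """
    q = np.asarray(q_ensemble, dtype=float)
    votes = q > np.median(q, axis=1, keepdims=True)  # per-critic votes
    approval = votes.mean(axis=0)                    # fraction of critics
    return approval >= threshold

# Example: two critics agree that the last two transitions look best,
# so only those would feed the downstream policy regression.
mask = vote_filter([[1.0, 2.0, 3.0, 4.0],
                    [1.0, 2.0, 3.0, 4.0]])
```

A filter of this shape keeps the federated protocol unchanged: voting happens entirely on-client, and only the filtered, advantage-weighted update is aggregated globally.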