Federated Reinforcement Learning in Heterogeneous Environments

📅 2025-07-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses privacy preservation and robustness challenges in multi-agent federated reinforcement learning (FRL) under statistical heterogeneity. To this end, we propose FedRQ, a novel federated Q-learning framework. First, we formulate a robust global objective that optimizes policy performance across heterogeneous local environments while preserving the privacy of local trajectories. Second, we design the FedRQ algorithm for environment heterogeneity and integrate expectile loss to extend federated Q-learning to continuous state spaces, enabling seamless compatibility with deep RL methods (e.g., DQN, SAC). Third, we conduct extensive experiments across diverse heterogeneous and perturbed environments; results show that FedRQ consistently outperforms state-of-the-art federated RL approaches in both policy robustness and convergence stability while maintaining strong privacy guarantees.

📝 Abstract
We investigate a Federated Reinforcement Learning with Environment Heterogeneity (FRL-EH) framework, where local environments exhibit statistical heterogeneity. Within this framework, agents collaboratively learn a global policy by aggregating their collective experiences while preserving the privacy of their local trajectories. To better reflect real-world scenarios, we introduce a robust FRL-EH framework by presenting a novel global objective function. This function is specifically designed to optimize a global policy that ensures robust performance across heterogeneous local environments and their plausible perturbations. We propose a tabular FRL algorithm named FedRQ and theoretically prove its asymptotic convergence to an optimal policy for the global objective function. Furthermore, we extend FedRQ to environments with continuous state space through the use of expectile loss, addressing the key challenge of minimizing a value function over a continuous subset of the state space. This advancement facilitates the seamless integration of the principles of FedRQ with various Deep Neural Network (DNN)-based RL algorithms. Extensive empirical evaluations validate the effectiveness and robustness of our FRL algorithms across diverse heterogeneous environments, consistently achieving superior performance over the existing state-of-the-art FRL algorithms.
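The expectile device mentioned in the abstract can be illustrated concretely. The sketch below is our own minimal NumPy rendering, not the authors' code (function names and hyperparameters are ours): the asymmetric squared loss weights negative residuals by 1 − τ and positive ones by τ, so as τ → 0 its minimizer approaches the minimum of the target values. This is how an expectile regression can stand in for a hard minimum over a continuous subset of the state space.

```python
import numpy as np

def expectile_loss(u, tau):
    """Asymmetric squared (expectile) loss: |tau - 1{u < 0}| * u^2.

    For tau < 0.5, negative residuals are penalized more heavily, so the
    minimizer is pulled below the mean, toward the minimum as tau -> 0.
    """
    weight = np.where(u < 0.0, 1.0 - tau, tau)
    return weight * u ** 2

def expectile(samples, tau, iters=200, lr=0.5):
    """Fit the tau-expectile of a 1-D sample array by gradient descent."""
    m = float(np.mean(samples))
    for _ in range(iters):
        u = samples - m
        # gradient of the mean expectile loss with respect to m
        grad = -2.0 * np.mean(np.where(u < 0.0, 1.0 - tau, tau) * u)
        m -= lr * grad
    return m
```

With τ = 0.5 this recovers the ordinary mean; driving τ toward 0 pushes the estimate toward the sample minimum, which is the property that lets a smooth, differentiable loss approximate minimization over a continuous set inside a DNN-based RL pipeline.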
Problem

Research questions and friction points this paper is trying to address.

How can agents collaboratively learn a global policy when their local environments are statistically heterogeneous?
How should the global objective be defined so that the learned policy remains robust across heterogeneous environments and their plausible perturbations?
How can tabular federated Q-learning be extended to continuous state spaces?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust FRL-EH framework with a novel global objective function
Tabular FedRQ algorithm with a proven asymptotic convergence guarantee
Expectile-loss extension to continuous state spaces, enabling integration with DNN-based RL algorithms
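The tabular setting admits a compact illustration of the federated loop. The sketch below is our own toy rendering, not the paper's algorithm: each agent runs standard Q-learning on its local transitions and uploads only its Q-table, never raw trajectories (the privacy angle), and the server combines tables with an element-wise minimum as one possible encoding of a worst-case robustness criterion. The actual FedRQ update rule is specified in the paper and may differ.

```python
import numpy as np

def local_q_update(Q, transitions, alpha=0.1, gamma=0.99):
    """One pass of standard tabular Q-learning on an agent's local data.

    transitions: iterable of (state, action, reward, next_state) tuples.
    Returns a new Q-table; the caller's table is left untouched.
    """
    Q = Q.copy()
    for s, a, r, s2 in transitions:
        target = r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])
    return Q

def robust_aggregate(local_Qs):
    """Server step: combine per-agent Q-tables.

    The element-wise minimum is one way to encode a worst-case (robust)
    criterion across heterogeneous environments; it is an illustrative
    choice here, not necessarily the FedRQ aggregation rule.
    """
    return np.minimum.reduce(local_Qs)
```

Note that only Q-tables cross the network in this loop, which is what allows collaborative learning while keeping each agent's trajectories local.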
Ukjo Hwang
Department of Electronic Engineering, Hanyang University, Seoul, 04763, Korea
Songnam Hong
Hanyang University
Machine Learning · Information Theory · Optimization