Heterogeneous Federated Reinforcement Learning Using Wasserstein Barycenters

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In heterogeneous federated reinforcement learning, poor global policy generalization, severe distributional shift, and policy conflicts pose significant challenges. To address these issues, this paper proposes FedWB—a novel federated RL algorithm that enables clients to independently train DQNs in heterogeneous environments (e.g., CartPole with varying pole lengths) and introduces the Wasserstein barycenter for model aggregation. Unlike conventional gradient- or parameter-averaging methods (e.g., FedAvg), FedWB computes a geometric center in parameter space, enabling data-free, gradient-free global model fusion while preserving structural consistency across heterogeneous local policies. This approach mitigates performance degradation inherent in naive averaging under heterogeneity. Experiments on multi-pole-length CartPole demonstrate that the globally aggregated DQN via FedWB achieves over 95% stable control success across all heterogeneous settings—substantially outperforming FedAvg, FedProx, and other baselines in both robustness and generalization.
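The summary describes replacing coordinate-wise parameter averaging (as in FedAvg) with a Wasserstein barycenter in parameter space. For equal-weight empirical measures on the real line with the same number of atoms, the Wasserstein-2 barycenter has a simple closed form: sort each sample and average the order statistics. The sketch below illustrates that closed form next to plain averaging; treating each client's flattened weight vector as a 1D empirical measure is an assumption made here for illustration, not the paper's actual construction, and mapping barycenter atoms back to specific network coordinates is a subtlety the sketch ignores.

```python
import numpy as np

def w2_barycenter_1d(weight_vectors):
    """Wasserstein-2 barycenter of equal-weight 1D empirical measures.

    Each client's flattened parameter vector is viewed as an empirical
    distribution with the same number of atoms; in one dimension the
    W2 barycenter is obtained by averaging sorted atoms (quantile
    averaging). Illustrative only -- not the paper's implementation.
    """
    sorted_ws = np.sort(np.stack(weight_vectors), axis=1)
    return sorted_ws.mean(axis=0)

def fedavg(weight_vectors):
    # Coordinate-wise mean of parameters, shown for comparison.
    return np.stack(weight_vectors).mean(axis=0)
```

Note how the two aggregates differ: the barycenter matches clients by quantile rather than by coordinate, which is the property FedWB exploits to preserve structure across heterogeneous local policies.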

📝 Abstract
In this paper, we first propose a novel algorithm for model fusion that leverages Wasserstein barycenters in training a global Deep Neural Network (DNN) in a distributed architecture. To this end, we divide the dataset into equal parts that are fed to "agents" who have identical deep neural networks and train only over the dataset fed to them (known as the local dataset). After some training iterations, we perform an aggregation step where we combine the weight parameters of all neural networks using Wasserstein barycenters. These steps form the proposed algorithm referred to as FedWB. Moreover, we leverage the processes created in the first part of the paper to develop an algorithm to tackle Heterogeneous Federated Reinforcement Learning (HFRL). Our test experiment is the CartPole toy problem, where we vary the lengths of the poles to create heterogeneous environments. We train a Deep Q-Network (DQN) in each environment to learn to control each cart, while occasionally performing a global aggregation step to generalize the local models; the end outcome is a global DQN that functions across all environments.
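The abstract outlines a two-phase loop: each agent trains its local network for some iterations, then a global aggregation step fuses the weights before the next round. A minimal skeleton of that loop is sketched below. The local update is a placeholder for real DQN training on a client's environment, and the 1D quantile-averaging aggregator stands in for the paper's barycenter step; function names and hyperparameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def local_update(w, rng):
    # Placeholder for one local training step (e.g., DQN gradient
    # updates on this client's CartPole variant); here a small
    # random perturbation keeps the sketch self-contained.
    return w - 0.1 * rng.normal(size=w.shape)

def aggregate_w2(client_weights):
    # 1D Wasserstein-2 barycenter of equal-size empirical measures:
    # average the sorted atoms (quantile averaging). Stand-in for
    # the paper's barycenter aggregation.
    return np.sort(np.stack(client_weights), axis=1).mean(axis=0)

def fedwb_rounds(n_clients=4, dim=8, n_rounds=3, local_steps=5, seed=0):
    """Skeleton of the train-locally / aggregate-globally loop."""
    rng = np.random.default_rng(seed)
    global_w = np.zeros(dim)
    for _ in range(n_rounds):
        client_ws = []
        for _ in range(n_clients):
            w = global_w.copy()          # broadcast global model
            for _ in range(local_steps):
                w = local_update(w, rng)  # local training phase
            client_ws.append(w)
        global_w = aggregate_w2(client_ws)  # global aggregation step
    return global_w
```

The same skeleton recovers FedAvg if the aggregator is replaced by a coordinate-wise mean, which is the baseline the paper compares against.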
Problem

Research questions and friction points this paper is trying to address.

Proposes FedWB for model fusion in distributed DNN training
Addresses heterogeneous federated reinforcement learning challenges
Tests global DQN generalization across varied CartPole environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Wasserstein barycenters for model fusion
Divides dataset equally among identical DNN agents
Applies FedWB algorithm to heterogeneous federated RL
Luiz Manella Pereira
Knight Foundation School of Computing and Information Sciences, Florida International University
M. Hadi Amini
Associate Professor, Florida International University
Distributed Learning · Edge AI · Trustworthy AI · CPS Security