🤖 AI Summary
This work addresses a critical limitation of conventional federated reinforcement learning, which typically aggregates policies or value functions by parameter averaging and thereby overlooks the multimodality and tail behavior of return distributions, degrading performance in safety-critical scenarios. To overcome this, we propose FedDistRL, the first federated distributional reinforcement learning framework, which federates only the quantile-based distributional critic. We further introduce TR-FedDistRL, which constructs a distributional trust region around local Wasserstein barycenters via a shrink-squash operation, preserving essential statistical properties of the return distribution. Experiments demonstrate that our approach substantially mitigates the mean-blurring effect, reduces safety risks such as accident rates, and alleviates both critic and policy drift, outperforming existing mean-focused and non-federated baselines.
📝 Abstract
Federated reinforcement learning typically aggregates value functions or policies by parameter averaging, which emphasizes expected return and can obscure statistical multimodality and tail behavior that matter in safety-critical settings. We formalize federated distributional reinforcement learning (FedDistRL), in which clients parameterize their critics as quantile value functions and federate only these networks. We also propose TR-FedDistRL, which builds a per-client, risk-aware Wasserstein barycenter over a temporal buffer. This local barycenter provides a reference region that constrains the parameter-averaged critic, ensuring that necessary distributional information is not averaged out during federation. The distributional trust region is implemented as a shrink-squash step around this reference. Under fixed-policy evaluation, the feasibility map is nonexpansive and the update is contractive in a probe-set Wasserstein metric. Experiments on a bandit, a multi-agent gridworld, and a continuous highway environment show reduced mean-smearing, improved safety proxies (catastrophe/accident rate), and lower critic/policy drift versus mean-oriented and non-federated baselines.
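To make the mechanism concrete, here is a minimal NumPy sketch of the core ideas under stated assumptions: in one dimension the Wasserstein-2 barycenter of distributions is the pointwise mean of their quantile functions, and one plausible reading of the paper's "shrink-squash" trust-region step is a radial projection of the parameter-averaged quantile vector back into a W2 ball around the local barycenter. The function names (`w2_quantiles`, `w2_barycenter`, `shrink_toward_reference`) and the specific projection rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def w2_quantiles(q1, q2):
    # 1D Wasserstein-2 distance between two distributions represented
    # by N equally spaced quantile values (empirical quantile functions).
    return np.sqrt(np.mean((q1 - q2) ** 2))

def w2_barycenter(quantile_sets):
    # In 1D, the W2 barycenter of a set of distributions is the
    # pointwise mean of their (sorted) quantile functions.
    return np.mean(np.asarray(quantile_sets), axis=0)

def shrink_toward_reference(q_avg, q_ref, radius):
    # Illustrative "shrink-squash" step (an assumption, not the paper's
    # exact rule): if the federated average drifts outside a W2 ball of
    # the given radius around the local barycenter, pull it radially
    # back onto the boundary; otherwise leave it unchanged.
    d = w2_quantiles(q_avg, q_ref)
    if d <= radius:
        return q_avg
    return q_ref + (radius / d) * (q_avg - q_ref)

# Example: two clients' return distributions as 3 quantile values each.
q_ref = w2_barycenter([np.array([0.0, 1.0, 2.0]),
                       np.array([2.0, 3.0, 4.0])])   # -> [1., 2., 3.]
q_proj = shrink_toward_reference(np.array([4.0, 5.0, 6.0]), q_ref, radius=1.0)
```

A side benefit of this radial form is that the projected vector is a convex combination of two sorted quantile vectors, so it remains monotone, i.e. a valid quantile function.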