🤖 AI Summary
This work addresses the challenge of learning a shared reward function in multi-agent systems under constraints imposed by environmental heterogeneity, privacy requirements, and limited communication bandwidth, which hinder direct data aggregation. The paper proposes the first federated inverse reinforcement learning framework based on optimal transport: each agent performs lightweight maximum-entropy IRL locally, and the resulting local reward functions are aggregated via Wasserstein barycenter computation. This approach preserves the geometric structure of individual rewards while enabling communication-efficient fusion, eschewing conventional parameter averaging. By doing so, it significantly enhances the fidelity of the recovered global reward function, simultaneously ensuring privacy preservation and adaptability to heterogeneous environments. Empirical results demonstrate superior performance over existing methods in both communication efficiency and generalization capability.
📝 Abstract
In robotics and multi-agent systems, fleets of autonomous agents often operate in subtly different environments while pursuing a common high-level objective. Directly pooling their data to learn a shared reward function is typically impractical due to differences in dynamics, privacy constraints, and limited communication bandwidth. This paper introduces an optimal transport-based approach to federated inverse reinforcement learning (IRL). Each client first performs lightweight Maximum Entropy IRL locally, adhering to its computational and privacy limitations. The resulting reward functions are then fused via a Wasserstein barycenter, which considers their underlying geometric structure. We further prove that this barycentric fusion yields a more faithful global reward estimate than conventional parameter averaging methods in federated learning. Overall, this work provides a principled and communication-efficient framework for deriving a shared reward that generalizes across heterogeneous agents and environments.