AI Summary
This paper addresses the challenges of distributional-shift robustness and sample efficiency in multi-agent online Markov games under sim-to-real settings. Methodologically, it introduces a minimum eigenvalue condition to mitigate the non-stationarity induced by adaptive opponents and designs a multi-agent-specific exploration reward, yielding the DR-CCE-LSI algorithm: a distributionally robust reinforcement learning framework for multi-agent linear function approximation based on least-squares value iteration. Theoretically, the algorithm achieves a regret bound of $O(dH\min\{H, 1/\min_i \sigma_i\}\sqrt{K})$, attaining minimax-optimal sample complexity in the feature dimension $d$; it efficiently converges to an $\varepsilon$-approximate distributionally robust coarse correlated equilibrium (DR-CCE). Empirical evaluation confirms its robustness against dynamic environmental distribution shifts and its superior sample efficiency.
Abstract
The sim-to-real gap, where agents trained in a simulator suffer significant performance degradation at test time, is a fundamental challenge in reinforcement learning. Extensive prior work adopts the framework of distributionally robust RL to learn a policy that acts robustly under worst-case environment shifts. Within this framework, our objective is to devise algorithms that are sample efficient with interactive data collection and large state spaces. Assuming $d$-rectangularity of the environment dynamics shift, we identify a fundamental hardness result for learning in online Markov games, and address it by adopting a minimum eigenvalue assumption. We then propose a novel least-squares value iteration-type algorithm, DR-CCE-LSI, with an exploration bonus devised specifically for multiple agents, to find an $\varepsilon$-approximate robust Coarse Correlated Equilibrium (CCE). For sample-efficient learning, we show that when the feature mapping function satisfies certain properties, DR-CCE-LSI achieves an $\varepsilon$-approximate CCE with a regret bound of $O(dH\min\{H, 1/\min_i \sigma_i\}\sqrt{K})$, where $K$ is the number of interaction episodes, $H$ is the horizon length, $d$ is the feature dimension, and $\sigma_i$ represents the uncertainty level of player $i$. Our work introduces the first sample-efficient algorithm for this setting, matches the best known result in the single-agent setting, and achieves minimax-optimal sample complexity in terms of the feature dimension $d$. We also conduct a simulation study to validate the efficacy of our algorithm in learning a robust equilibrium.
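The abstract describes a least-squares value iteration scheme with linear function approximation and an exploration bonus. The paper's DR-CCE-LSI algorithm is not reproduced here; as a hedged illustration of the underlying LSVI-with-bonus template only, the sketch below runs single-agent, non-robust LSVI-UCB-style updates on a hypothetical toy featurized MDP (all problem data — `phi`, `P`, `R`, the sizes, and the bonus scale `beta` — are invented for illustration). The optimism bonus has the standard elliptical form $\beta\sqrt{\phi^\top \Lambda^{-1} \phi}$, with $\Lambda$ the ridge-regularized Gram matrix of observed features.

```python
import numpy as np

rng = np.random.default_rng(0)

d, H, S, A = 4, 3, 5, 2   # feature dim, horizon, #states, #actions (toy sizes)
K = 50                    # number of interaction episodes
beta, lam = 1.0, 1.0      # bonus scale and ridge parameter (assumed values)

# Hypothetical linear structure: feature map phi(s, a) in R^d,
# nominal simulator dynamics P and rewards R.
phi = rng.random((S, A, d))
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))

def lsvi_ucb_episode(data):
    """One backward pass of least-squares value iteration.

    At each step h, fit w by ridge regression of (r + V_{h+1}(s'))
    on phi(s, a), then add the optimism bonus
    beta * sqrt(phi^T Lambda^{-1} phi) and clip Q at H.
    """
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 2, S))
    for h in range(H, 0, -1):
        Lam = lam * np.eye(d)
        b = np.zeros(d)
        for (hh, s, a, r, s2) in data:
            if hh != h:
                continue
            f = phi[s, a]
            Lam += np.outer(f, f)          # Gram matrix of visited features
            b += f * (r + V[h + 1, s2])    # regression targets
        w = np.linalg.solve(Lam, b)
        Lam_inv = np.linalg.inv(Lam)
        for s in range(S):
            for a in range(A):
                f = phi[s, a]
                bonus = beta * np.sqrt(f @ Lam_inv @ f)
                Q[h, s, a] = min(f @ w + bonus, H)
        V[h] = Q[h].max(axis=1)
    return Q

# Online loop: act greedily w.r.t. the optimistic Q, collect transitions.
data = []
for k in range(K):
    Q = lsvi_ucb_episode(data)
    s = 0
    for h in range(1, H + 1):
        a = int(np.argmax(Q[h, s]))
        r = R[s, a]
        s2 = rng.choice(S, p=P[s, a])
        data.append((h, s, a, r, s2))
        s = s2
```

The multi-agent robust version in the paper differs in substance: the regression targets come from robust Bellman backups under the $d$-rectangular uncertainty set, the bonus is tailored to multiple agents, and policies are extracted as a CCE of the resulting stage games rather than by a greedy argmax.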