Distributionally Robust Online Markov Game with Linear Function Approximation

📅 2025-11-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenges of distributional shift robustness and sample efficiency in multi-agent online Markov games under sim-to-real settings. Methodologically, it introduces a minimum eigenvalue condition to mitigate non-stationarity induced by adaptive opponents and designs a multi-agent-specific exploration reward, yielding the DR-CCE-LSI algorithm, a distributionally robust reinforcement learning framework for multi-agent linear function approximation based on least-squares value iteration. Theoretically, the algorithm achieves a regret bound of $O(dH\min\{H, 1/\min_i \sigma_i\}\sqrt{K})$, attaining minimax-optimal sample complexity in the feature dimension $d$, and efficiently converges to an $\varepsilon$-approximate distributionally robust coarse correlated equilibrium (DR-CCE). Empirical evaluation confirms its robustness against dynamic environmental distribution shifts and superior sample efficiency.

πŸ“ Abstract
The sim-to-real gap, where agents trained in a simulator face significant performance degradation during testing, is a fundamental challenge in reinforcement learning. Extensive work adopts the framework of distributionally robust RL to learn a policy that acts robustly under worst-case environment shift. Within this framework, our objective is to devise algorithms that are sample efficient with interactive data collection and large state spaces. By assuming d-rectangularity of the environment dynamics shift, we identify a fundamental hardness result for learning in online Markov games, and address it by adopting a minimum eigenvalue assumption. Then, a novel least-squares value iteration type algorithm, DR-CCE-LSI, with an exploration bonus devised specifically for multiple agents, is proposed to find an epsilon-approximate robust Coarse Correlated Equilibrium (CCE). Toward sample-efficient learning, we find that when the feature mapping function satisfies certain properties, our algorithm, DR-CCE-LSI, is able to achieve an epsilon-approximate CCE with a regret bound of O(dH min{H, 1/min_i sigma_i} sqrt{K}), where K is the number of interacting episodes, H is the horizon length, d is the feature dimension, and sigma_i represents the uncertainty level of player i. Our work introduces the first sample-efficient algorithm for this setting, matches the best result so far in the single-agent setting, and achieves minimax-optimal sample complexity in terms of the feature dimension d. Meanwhile, we also conduct a simulation study to validate the efficacy of our algorithm in learning a robust equilibrium.
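The paper's full DR-CCE-LSI algorithm is not reproduced here, but the least-squares value iteration backbone it builds on is standard: regress backed-up value targets onto the d-dimensional features via ridge regression, then add an elliptical-potential exploration bonus. The sketch below shows that generic LSVI-with-bonus step only; the function name, the parameters `lam` and `beta`, and the specific bonus scaling are illustrative assumptions, not the paper's actual construction (which additionally handles the robust Bellman operator and multiple agents).

```python
import numpy as np

def lsvi_bonus_step(features, targets, lam=1.0, beta=1.0):
    """Generic least-squares value iteration step with a UCB-style bonus.

    features : (n, d) array of feature vectors phi(s, a) from collected data
    targets  : (n,)  array of backed-up value targets (reward + next value)
    lam, beta: ridge and bonus coefficients (illustrative choices)
    """
    d = features.shape[1]
    # Ridge-regularized Gram matrix: Lambda = lam * I + sum phi phi^T
    Lambda = lam * np.eye(d) + features.T @ features
    Lambda_inv = np.linalg.inv(Lambda)
    # Least-squares weights: w = Lambda^{-1} sum phi * target
    w = Lambda_inv @ features.T @ targets

    def q_value(phi):
        # Optimistic estimate: linear prediction plus elliptical bonus
        # beta * sqrt(phi^T Lambda^{-1} phi), which shrinks in directions
        # the data has already covered.
        bonus = beta * np.sqrt(phi @ Lambda_inv @ phi)
        return float(phi @ w + bonus)

    return w, q_value
```

In the multi-agent setting described above, a step of this shape would be run per player, with the bonus tailored so that all players' optimistic estimates are simultaneously valid, which is what drives the CCE guarantee.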
Problem

Research questions and friction points this paper is trying to address.

Addresses sim-to-real performance gap in multi-agent reinforcement learning systems
Develops sample-efficient algorithm for robust equilibrium under environment shifts
Solves distributionally robust Markov games with linear function approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally robust online Markov game framework
DR-CCE-LSI algorithm with exploration bonus
Linear function approximation for sample efficiency
Zewu Zheng
Department of Statistics and Data Science, The Chinese University of Hong Kong
Yuanyuan Lin
The Chinese University of Hong Kong
Statistics