Achieving Logarithmic Regret in KL-Regularized Zero-Sum Markov Games

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates low-regret learning in KL-regularized zero-sum Markov games. To overcome the $O(\sqrt{T})$ regret of conventional algorithms, it proposes optimistic and super-optimistic reward mechanisms, combined with reverse KL-divergence regularization, best-response sampling, and guidance from a pre-trained reference policy, yielding the algorithms OMG and SOMG. Both algorithms are shown to achieve $O(\log T)$ logarithmic regret over $T$ rounds, the first such result for KL-regularized Markov games. The analysis also quantifies the trade-off between the regularization strength $\beta$ and sample efficiency, rigorously characterizing how KL regularization accelerates convergence and reduces sample complexity. These results clarify the fundamental role of KL regularization in enhancing learning stability and statistical efficiency, pointing toward game-theoretic reinforcement learning that combines strong theoretical guarantees with practical applicability.

📝 Abstract
Reverse Kullback-Leibler (KL) divergence-based regularization with respect to a fixed reference policy is widely used in modern reinforcement learning to preserve the desired traits of the reference policy and sometimes to promote exploration (using a uniform reference policy, known as entropy regularization). Beyond serving as a mere anchor, the reference policy can also be interpreted as encoding prior knowledge about good actions in the environment. In the context of alignment, recent game-theoretic approaches have leveraged KL regularization with pretrained language models as reference policies, achieving notable empirical success in self-play methods. Despite these advances, the theoretical benefits of KL regularization in game-theoretic settings remain poorly understood. In this work, we develop and analyze algorithms that provably achieve improved sample efficiency under KL regularization. We study both two-player zero-sum Matrix games and Markov games: for Matrix games, we propose OMG, an algorithm based on best response sampling with optimistic bonuses, and extend this idea to Markov games through the algorithm SOMG, which also uses best response sampling and a novel concept of superoptimistic bonuses. Both algorithms achieve a logarithmic regret in $T$ that scales inversely with the KL regularization strength $\beta$ in addition to the standard $\widetilde{\mathcal{O}}(\sqrt{T})$ regret independent of $\beta$ which is attained in both regularized and unregularized settings.

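As a concrete illustration of the reverse-KL regularization described above: in a matrix game, the regularized best response has a well-known closed form, a softmax tilt of the reference policy by the action values, with the regularization strength $\beta$ controlling how far the response can move from the reference. The sketch below shows this closed form; the function name and the rock-paper-scissors example are illustrative, not the paper's OMG algorithm.

```python
import numpy as np

def kl_regularized_best_response(payoff, opponent_policy, reference, beta):
    """Row player's best response under reverse-KL regularization.

    Maximizes  x^T (payoff @ y) - beta * KL(x || reference)  over the
    simplex. The maximizer has the closed form
        x(a) ∝ reference(a) * exp((payoff @ y)(a) / beta),
    i.e. a softmax tilt of the reference policy by the action values.
    """
    q = payoff @ opponent_policy            # expected payoff of each row action
    logits = np.log(reference) + q / beta
    logits -= logits.max()                  # subtract max for numerical stability
    x = np.exp(logits)
    return x / x.sum()

# Rock-paper-scissors payoff for the row player, uniform reference policy.
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])
y = np.array([0.5, 0.25, 0.25])              # opponent over-plays "rock"
mu = np.ones(3) / 3

sharp = kl_regularized_best_response(A, y, mu, beta=0.05)    # small beta: near greedy
anchored = kl_regularized_best_response(A, y, mu, beta=1e6)  # large beta: near reference
```

With small $\beta$ the response concentrates on the unregularized best action ("paper" against a rock-heavy opponent); as $\beta$ grows, the response collapses back to the reference policy, which is the anchoring behavior the regularizer is designed to provide.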
Problem

Research questions and friction points this paper is trying to address.

Achieving logarithmic regret in KL-regularized zero-sum Markov games
Developing algorithms with improved sample efficiency under KL regularization
Analyzing theoretical benefits of KL regularization in game-theoretic settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

KL regularization with reference policy
Optimistic best response sampling algorithm
Logarithmic regret scaling with regularization strength
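The "optimistic best response sampling" idea listed above rests on optimism in the face of uncertainty: empirical payoff estimates are inflated by an exploration bonus that shrinks as an action is sampled more often. The sketch below uses a standard UCB-style bonus as a stand-in, not the paper's exact OMG/SOMG bonuses; the constant `c` is a hypothetical exploration parameter.

```python
import numpy as np

def optimistic_values(payoff_sums, counts, t, c=1.0):
    """Empirical mean payoffs inflated by a UCB-style exploration bonus.

    Generic optimism in the face of uncertainty: the bonus shrinks as an
    action's visit count grows, so rarely tried actions look better and
    get sampled. (Illustrative; not the paper's exact bonus construction.)
    """
    safe_counts = np.maximum(counts, 1)           # avoid division by zero
    means = payoff_sums / safe_counts
    bonus = c * np.sqrt(np.log(t + 1) / safe_counts)
    return means + bonus

# Two actions with the same empirical mean payoff (0.5); the less-visited
# action receives the larger bonus, so an optimistic player prefers it.
values = optimistic_values(np.array([2.0, 8.0]), np.array([4, 16]), t=100)
```

Best responding to such inflated values is what drives exploration: once an under-sampled action is tried enough times, its bonus decays and the estimate converges to the true mean.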