Policy Regularized Distributionally Robust Markov Decision Processes with Linear Function Approximation

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address robust decision-making in online reinforcement learning under distributional shifts, this paper proposes DR-RPO, a model-free policy optimization algorithm. DR-RPO is the first to integrate policy optimization with distributionally robust Markov decision process (MDP) theory, employing reference-policy regularization and a dual-constraint mechanism to jointly restrict both the policy space and transition dynamics, thereby enabling robust policy learning under adversarial environmental changes. Built upon the *d*-rectangular linear MDP framework, it combines linear function approximation, upper-confidence-bound (UCB) reward estimation, and policy gradients to support optimistic exploration in large state-action spaces. Theoretically, DR-RPO achieves sublinear regret and polynomial sample complexity. Empirically, it significantly outperforms existing methods across diverse distribution-shift scenarios, demonstrating both high robustness and superior sample efficiency.
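The reference-policy regularization mentioned above constrains how far the learned softmax policy can drift from a reference policy. A standard way to realize this is a KL-regularized improvement step, which admits a closed-form solution. The sketch below is a minimal illustration of that generic mechanism, not the paper's exact DR-RPO update; the function name, the temperature `tau`, and the toy values are all assumptions for demonstration.

```python
import numpy as np

def kl_regularized_softmax_update(q_values, pi_ref, tau):
    """One step of KL-regularized policy improvement over a finite action set.

    Solves   max_pi  <pi, q> - tau * KL(pi || pi_ref)
    whose closed-form maximizer is  pi(a) ∝ pi_ref(a) * exp(q(a) / tau).
    Larger tau pulls the new policy toward the reference policy.
    """
    logits = np.log(pi_ref) + q_values / tau
    logits -= logits.max()            # subtract max for numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

# Toy example: two actions, uniform reference policy.
q = np.array([1.0, 0.0])
pi_ref = np.array([0.5, 0.5])
pi = kl_regularized_softmax_update(q, pi_ref, tau=1.0)
```

As `tau` grows, the update returns (approximately) `pi_ref` itself, which is how the regularizer restricts the effective policy class.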

📝 Abstract
Decision-making under distribution shift is a central challenge in reinforcement learning (RL), where training and deployment environments differ. We study this problem through the lens of robust Markov decision processes (RMDPs), which optimize performance against adversarial transition dynamics. Our focus is the online setting, where the agent has only limited interaction with the environment, making sample efficiency and exploration especially critical. Policy optimization, despite its success in standard RL, remains theoretically and empirically underexplored in robust RL. To bridge this gap, we propose the **D**istributionally **R**obust **R**egularized **P**olicy **O**ptimization algorithm (DR-RPO), a model-free online policy optimization method that learns robust policies with sublinear regret. To enable tractable optimization within the softmax policy class, DR-RPO incorporates reference-policy regularization, yielding RMDP variants that are doubly constrained in both transitions and policies. To scale to large state-action spaces, we adopt the *d*-rectangular linear MDP formulation and combine linear function approximation with an upper confidence bonus for optimistic exploration. We provide theoretical guarantees showing that policy optimization can achieve polynomial suboptimality bounds and sample efficiency in robust RL, matching the performance of value-based approaches. Finally, empirical results across diverse domains corroborate our theory and demonstrate the robustness of DR-RPO.
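The upper confidence bonus used for optimistic exploration in linear MDPs is typically the elliptical-potential term `beta * sqrt(phi(s,a)^T Lambda^{-1} phi(s,a))`, where `Lambda` is the ridge-regularized feature covariance of visited state-action pairs. The sketch below illustrates that generic bonus, assuming a 3-dimensional feature map and a hypothetical scaling constant `beta`; it is not the paper's specific robust construction.

```python
import numpy as np

def ucb_bonus(phi, Lambda_inv, beta):
    """Elliptical exploration bonus: beta * sqrt(phi^T Lambda^{-1} phi).

    Directions of feature space visited often have small bonus;
    rarely visited directions get a large, optimistic bonus.
    """
    return beta * np.sqrt(phi @ Lambda_inv @ phi)

d = 3
Lambda = np.eye(d)  # ridge-regularized design matrix, starts at identity
# Suppose the agent has repeatedly observed features along the first axis.
for _ in range(10):
    phi_seen = np.array([1.0, 0.0, 0.0])
    Lambda += np.outer(phi_seen, phi_seen)
Lambda_inv = np.linalg.inv(Lambda)

phi_visited = np.array([1.0, 0.0, 0.0])  # well-explored direction
phi_novel = np.array([0.0, 1.0, 0.0])    # unexplored direction
# ucb_bonus(phi_novel, ...) exceeds ucb_bonus(phi_visited, ...),
# steering exploration toward the unvisited direction.
```

Adding this bonus to estimated rewards or Q-values is what makes the exploration optimistic: uncertainty inflates the value of poorly covered state-action pairs.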
Problem

Research questions and friction points this paper is trying to address.

Addresses decision-making under distribution shift in reinforcement learning
Proposes online policy optimization for robust Markov decision processes
Scales robust RL to large spaces using linear function approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy regularization for robust Markov decision processes
Linear function approximation for large state spaces
Upper confidence bonus for optimistic exploration