Optimistic Actor-Critic with Parametric Policies for Linear Markov Decision Processes

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the gap between theory and practice in actor-critic methods for linear Markov decision processes, where existing approaches often sacrifice exploration efficiency, policy parameterization expressiveness, or practical feasibility. To bridge this divide, the paper introduces an optimistic actor-critic framework based on log-linear policies. It is the first to incorporate explicitly parameterized policies into rigorous theoretical analysis, optimizing the policy via computable logit-matching regression and achieving efficient optimistic value estimation through Langevin Monte Carlo–approximated Thompson sampling. The proposed method attains a sample complexity of $\widetilde{\mathcal{O}}(\varepsilon^{-4})$ in the on-policy setting and $\widetilde{\mathcal{O}}(\varepsilon^{-2})$ in the off-policy setting, thereby offering both theoretical optimality and practical scalability.
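To make the critic's optimistic value estimation concrete, here is a minimal sketch of Thompson sampling approximated by Langevin Monte Carlo, as the summary describes: noisy gradient steps on a regularized least-squares loss yield approximate posterior samples of the critic weights, and taking the maximum over sampled value estimates induces optimism. The step size, temperature, iteration count, and toy regression data are all illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def lmc_sample(phi, y, lam=1.0, eta=0.01, n_steps=200, temp=0.1):
    """Approximate posterior sample of critic weights w via Langevin
    Monte Carlo on the ridge loss L(w) = ||phi @ w - y||^2 + lam*||w||^2.
    Step size `eta`, temperature `temp`, and `n_steps` are assumptions."""
    d = phi.shape[1]
    w = np.zeros(d)
    for _ in range(n_steps):
        grad = 2 * phi.T @ (phi @ w - y) + 2 * lam * w
        # Gradient step plus Gaussian noise scaled by sqrt(2 * eta * temp).
        w = w - eta * grad + np.sqrt(2 * eta * temp) * rng.standard_normal(d)
    return w

# Toy linear-regression data standing in for the critic's targets.
phi = rng.standard_normal((50, 4))
y = phi @ np.array([1.0, -0.5, 0.2, 0.0]) + 0.1 * rng.standard_normal(50)

# Optimism via sampling: draw several weight samples and take the max
# of the induced value estimates at a query feature phi_sa.
phi_sa = rng.standard_normal(4)
q_samples = [phi_sa @ lmc_sample(phi, y) for _ in range(5)]
q_optimistic = max(q_samples)
```

The max over posterior samples plays the role of an optimistic bonus: states whose value is uncertain get inflated estimates, which drives exploration without an explicit bonus term.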
📝 Abstract
Although actor-critic methods have been successful in practice, their theoretical analyses have several limitations. Specifically, existing theoretical work either sidesteps the exploration problem by making strong assumptions or analyzes impractical methods with complicated algorithmic modifications. Moreover, the actor-critic methods analyzed for linear MDPs often employ natural policy gradient (NPG) and construct "implicit" policies without explicit parameterization. Such policies are computationally expensive to sample from, making environment interactions inefficient. To that end, we focus on finite-horizon linear MDPs and propose an optimistic actor-critic framework that uses parametric log-linear policies. In particular, we introduce a tractable \textit{logit-matching} regression objective for the actor. For the critic, we use approximate Thompson sampling via Langevin Monte Carlo to obtain optimistic value estimates. We prove that the resulting algorithm achieves $\widetilde{\mathcal{O}}(\varepsilon^{-4})$ and $\widetilde{\mathcal{O}}(\varepsilon^{-2})$ sample complexity in the on-policy and off-policy settings, respectively. Our results match prior theoretical work in achieving the state-of-the-art sample complexity, while our algorithm is more aligned with practice.
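The abstract's actor update pairs a log-linear policy with a logit-matching regression objective. A minimal sketch of that idea, under assumed details: the policy is a softmax over linear logits $\langle\theta, \phi(s,a)\rangle$, and the actor fits $\theta$ by least-squares regression of those logits onto target logits (here a realizable synthetic target stands in for, e.g., a scaled optimistic Q-estimate; the closed-form ridge solve and regularizer are illustrative choices, not the paper's algorithm).

```python
import numpy as np

rng = np.random.default_rng(1)

def log_linear_policy(theta, phi_s):
    """Softmax policy with logits <theta, phi(s, a)>; phi_s stacks the
    per-action feature vectors of one state, shape (num_actions, d)."""
    logits = phi_s @ theta
    logits = logits - logits.max()  # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def logit_matching_fit(phi, target_logits, lam=1e-3):
    """Ridge regression of the policy logits onto target logits; the
    closed-form solve and regularizer lam are illustrative assumptions."""
    d = phi.shape[1]
    A = phi.T @ phi + lam * np.eye(d)
    return np.linalg.solve(A, phi.T @ target_logits)

# Toy usage: targets are realizable by construction, so the fit is tight.
phi = rng.standard_normal((100, 6))    # features for 100 (s, a) pairs
target = phi @ rng.standard_normal(6)  # synthetic target logits
theta = logit_matching_fit(phi, target)
probs = log_linear_policy(theta, phi[:4])  # first 4 rows as one state's actions
```

Because the regression target is linear in the same features, the fitted logits recover the target almost exactly; in the paper's setting the appeal is that this objective is a plain regression, hence computable, unlike implicit NPG policies that are expensive to sample from.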
Problem

Research questions and friction points this paper is trying to address.

actor-critic
linear MDPs
exploration
parametric policies
sample complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

optimistic actor-critic
parametric log-linear policy
logit-matching regression
approximate Thompson sampling
linear MDPs
Max Qiushi Lin
Simon Fraser University
Reza Asad
Simon Fraser University
Kevin Tan
University of Pennsylvania
Haque Ishfaq
Mila, McGill University
Csaba Szepesvári
Google DeepMind, University of Alberta
Sharan Vaswani
Simon Fraser University
Machine Learning · Optimization · Artificial Intelligence