Model-Based Exploration in Monitored Markov Decision Processes

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In real-world reinforcement learning, rewards are often only partially observable (e.g., due to missing human feedback or sensor failures). Monitored Markov decision processes (Mon-MDPs) model such settings, but existing Mon-MDP algorithms suffer from four key limitations: they ignore problem structure, cannot leverage a known monitoring mechanism, lack worst-case guarantees for "unsolvable" Mon-MDPs without specific initialization, and offer only asymptotic convergence proofs. This paper proposes the first model-based Mon-MDP algorithm with a finite-sample performance bound and provable worst-case convergence. It runs two instances of model-based interval estimation (MBIE): one guarantees that observable rewards are actually observed, while the other learns the optimal policy; the algorithm can also explicitly incorporate a known monitor, eliminating the reliance on favorable initialization. Across more than two dozen benchmark settings, the method converges faster than prior algorithms, with even larger gains when the monitor process is known. Theoretically, it converges to an optimal worst-case policy when some rewards are never observable.

📝 Abstract
A tenet of reinforcement learning is that rewards are always observed by the agent. However, this is not true in many realistic settings, e.g., a human observer may not always be able to provide rewards, a sensor to observe rewards may be limited or broken, or rewards may be unavailable during deployment. Monitored Markov decision processes (Mon-MDPs) have recently been proposed as a model of such settings. Yet, Mon-MDP algorithms developed thus far do not fully exploit the problem structure, cannot take advantage of a known monitor, have no worst-case guarantees for ``unsolvable'' Mon-MDPs without specific initialization, and only have asymptotic proofs of convergence. This paper makes three contributions. First, we introduce a model-based algorithm for Mon-MDPs that addresses all of these shortcomings. The algorithm uses two instances of model-based interval estimation, one to guarantee that observable rewards are indeed observed, and another to learn the optimal policy. Second, empirical results demonstrate these advantages, showing faster convergence than prior algorithms in over two dozen benchmark settings, and even more dramatic improvements when the monitor process is known. Third, we present the first finite-sample bound on performance and show convergence to an optimal worst-case policy when some rewards are never observable.
Problem

Research questions and friction points this paper is trying to address.

Addressing reward unobservability in Monitored Markov Decision Processes
Developing model-based algorithm with worst-case performance guarantees
Ensuring convergence to optimal policy despite limited reward observability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based interval estimation (MBIE) algorithm for Mon-MDPs
Two MBIE instances: one to guarantee reward observation, one to learn the policy
Finite-sample bound with worst-case optimal convergence
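The dual-instance interval-estimation idea above can be sketched in a toy tabular form. The sketch below is an illustration, not the paper's algorithm: it assumes a known transition model, uses a simplified Hoeffding-style confidence width (`conf / sqrt(n)`), and all names (`interval_q`, `r_mean`, `n_obs`) are hypothetical. The optimistic instance treats never-observed rewards as `r_max`, driving exploration toward them; the pessimistic (worst-case) instance treats them as `r_min`, matching the worst-case-optimal objective when some rewards are never observable.

```python
import numpy as np

def interval_q(P, r_mean, n_obs, gamma=0.9, conf=2.0,
               r_min=0.0, r_max=1.0, optimistic=True, iters=500):
    """One simplified MBIE-style instance (illustrative only).

    P:      (S, A, S) transition probabilities, assumed known here;
            full MBIE also builds confidence sets over P.
    r_mean: (S, A) empirical mean of rewards that were actually observed.
    n_obs:  (S, A) count of times the reward was observed (not just the
            transition) -- the Mon-MDP twist.
    """
    S, A = r_mean.shape
    # Simplified Hoeffding-style confidence width for observed rewards.
    width = conf / np.sqrt(np.maximum(n_obs, 1))
    if optimistic:
        # Unobserved rewards get the optimistic bound r_max.
        r = np.where(n_obs > 0, np.minimum(r_mean + width, r_max), r_max)
    else:
        # Worst-case instance: unobserved rewards pessimistically r_min.
        r = np.where(n_obs > 0, np.maximum(r_mean - width, r_min), r_min)
    # Plain value iteration on the (known) model with the bounded rewards.
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = r + gamma * P.reshape(S * A, S).dot(V).reshape(S, A)
    return Q
```

In a one-state example where action 0's reward has been observed 100 times (mean 0.2) and action 1's reward never, the optimistic instance prefers the never-observed action (directed exploration), while the pessimistic instance values it at the worst case.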