Is Pure Exploitation Sufficient in Exogenous MDPs with Linear Function Approximation?

πŸ“… 2026-01-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of learning in exogenous Markov decision processes (Exo-MDPs), where traditional approaches rely on explicit exploration to ensure performance. The paper introduces the first purely exploitative learning framework (PEL) that operates without any exploration or tabular assumptions, leveraging two novel tools: counterfactual trajectory analysis and Bellman closure-preserving feature transfer. Theoretically, under the tabular setting, PEL achieves a finite-sample regret bound of $\widetilde{O}(H^2|\Xi|\sqrt{K})$; under linear function approximation, it attains a polynomial regret bound independent of the size of the endogenous state-action space. Empirical evaluations demonstrate that PEL outperforms existing baseline methods, highlighting its effectiveness in practical scenarios.

πŸ“ Abstract
Exogenous MDPs (Exo-MDPs) capture sequential decision-making where uncertainty comes solely from exogenous inputs that evolve independently of the learner's actions. This structure is especially common in operations research applications such as inventory control, energy storage, and resource allocation, where exogenous randomness (e.g., demand, arrivals, or prices) drives system behavior. Despite decades of empirical evidence that greedy, exploitation-only methods work remarkably well in these settings, theory has lagged behind: all existing regret guarantees for Exo-MDPs rely on explicit exploration or tabular assumptions. We show that exploration is unnecessary. We propose Pure Exploitation Learning (PEL) and prove the first general finite-sample regret bounds for exploitation-only algorithms in Exo-MDPs. In the tabular case, PEL achieves $\widetilde{O}(H^2|\Xi|\sqrt{K})$ regret. For large, continuous endogenous state spaces, we introduce LSVI-PE, a simple linear-approximation method whose regret is polynomial in the feature dimension, exogenous state space, and horizon, independent of the endogenous state and action spaces. Our analysis introduces two new tools: counterfactual trajectories and Bellman-closed feature transport, which together allow greedy policies to have accurate value estimates without optimism. Experiments on synthetic and resource-management tasks show that PEL consistently outperforms baselines. Overall, our results overturn the conventional wisdom that exploration is required, demonstrating that in Exo-MDPs, pure exploitation is enough.
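The core idea described in the abstract, least-squares value iteration with a purely greedy (bonus-free) policy, can be sketched as follows. This is a minimal illustration under my own assumptions, not the authors' LSVI-PE implementation: the toy inventory environment, the one-hot features, and all function names here are hypothetical, and the sketch omits the paper's counterfactual-trajectory and feature-transport machinery.

```python
import numpy as np

def lsvi_pe(phi, d, H, K, env_step, env_reset, actions, lam=1.0):
    """Sketch of exploitation-only LSVI: fit Q by ridge regression on
    collected transitions, then act greedily -- no exploration bonus."""
    data = [[] for _ in range(H)]          # (s, a, r, s') tuples per step h
    w = [np.zeros(d) for _ in range(H)]    # linear Q-weights per step h
    returns = []
    for k in range(K):
        # Backward least-squares value iteration over all data so far.
        for h in reversed(range(H)):
            A = lam * np.eye(d)
            b = np.zeros(d)
            for (s, a, r, s2) in data[h]:
                v_next = 0.0
                if h + 1 < H:
                    v_next = max(phi(s2, a2) @ w[h + 1] for a2 in actions)
                f = phi(s, a)
                A += np.outer(f, f)
                b += f * (r + v_next)
            w[h] = np.linalg.solve(A, b)
        # Roll out the greedy policy (pure exploitation).
        s, total = env_reset(), 0.0
        for h in range(H):
            a = max(actions, key=lambda a_: phi(s, a_) @ w[h])
            s2, r = env_step(s, a, h)
            data[h].append((s, a, r, s2))
            total += r
            s = s2
        returns.append(total)
    return returns

# Hypothetical toy Exo-MDP: exogenous demand drives an inventory system.
CAP, ORDERS, H, K = 4, (0, 1, 2), 3, 40
NS = CAP + 1
def phi(s, a):                             # one-hot (state, action) features
    v = np.zeros(NS * len(ORDERS))
    v[s * len(ORDERS) + a] = 1.0
    return v

rng = np.random.default_rng(1)
def reset():
    return 0                               # start with empty inventory

def step(s, a, h):
    demand = int(rng.integers(0, 3))       # exogenous input, action-independent
    stock = min(s + a, CAP)
    sold = min(stock, demand)
    reward = 2.0 * sold - 1.0 * a          # revenue minus ordering cost
    return stock - sold, reward

rets = lsvi_pe(phi, NS * len(ORDERS), H, K, step, reset, ORDERS)
```

The only randomness the greedy policy ever sees is the exogenous demand, which is the structural reason (per the abstract) that exploitation alone can suffice in this class of problems.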
Problem

Research questions and friction points this paper is trying to address.

Exogenous MDPs
Pure Exploitation
Linear Function Approximation
Regret Bounds
Exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exogenous MDPs
Pure Exploitation
Linear Function Approximation
Regret Bounds
Exploration-Free Learning