Epistemic Robust Offline Reinforcement Learning

📅 2026-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses epistemic uncertainty in offline reinforcement learning, which arises from limited data coverage or behavior-policy bias, by proposing a unified and generalizable framework. Departing from conventional large-scale ensemble architectures, it integrates a compact uncertainty set over Q-values with the Epinet architecture, optimizing cumulative reward under a robust Bellman objective while explicitly disentangling epistemic from aleatoric uncertainty. By incorporating a risk-sensitive behavior policy and introducing a new benchmark evaluation protocol, the framework consistently outperforms existing ensemble-based methods on both tabular and continuous-state tasks, demonstrating stronger robustness and generalization.
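The Epinet mentioned in the summary replaces an N-member ensemble with a single network whose prediction is perturbed by a sampled "epistemic index": varying the index probes epistemic uncertainty without training N separate Q-networks. A minimal numpy sketch of that idea follows; the linear heads, shapes, and names here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def epinet_q(x, z, base_w, epi_w):
    """Toy Epinet-style Q-head (hypothetical linear form): a base
    prediction plus a correction conditioned on an epistemic index z
    drawn from a reference distribution."""
    base = x @ base_w             # base-network prediction, independent of z
    correction = (x @ epi_w) @ z  # epinet term, varies with the index z
    return base + correction

# The spread of predictions across sampled indices plays the role that
# disagreement across N ensemble members plays in SAC-N.
x = np.ones(4)
base_w = np.full(4, 0.25)                       # base prediction is 1.0 here
epi_w = rng.normal(size=(4, 3)) * 0.1
zs = [rng.normal(size=3) for _ in range(8)]
preds = [epinet_q(x, z, base_w, epi_w) for z in zs]
spread = np.std(preds)                          # proxy for epistemic uncertainty
```

With a zero index the correction vanishes and the base prediction is recovered, which is one way such architectures keep the mean estimate and the uncertainty term separable.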
📝 Abstract
Offline reinforcement learning learns policies from fixed datasets without further environment interaction. A key challenge in this setting is epistemic uncertainty, arising from limited or biased data coverage, particularly when the behavior policy systematically avoids certain actions. This can lead to inaccurate value estimates and unreliable generalization. Ensemble-based methods such as SAC-N mitigate this by conservatively estimating Q-values using the ensemble minimum, but they require large ensembles and often conflate epistemic with aleatoric uncertainty. To address these limitations, we propose a unified and generalizable framework that replaces discrete ensembles with compact uncertainty sets over Q-values. We further introduce an Epinet-based model that directly shapes the uncertainty sets to optimize the cumulative reward under the robust Bellman objective without relying on ensembles. We also introduce a benchmark for evaluating offline RL algorithms under risk-sensitive behavior policies, and demonstrate that our method achieves improved robustness and generalization over ensemble-based baselines across both tabular and continuous-state domains.
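As a toy illustration of the contrast the abstract draws, the sketch below compares a SAC-N-style conservative target (minimum over N Q-estimates) with a pessimistic target taken from a compact interval-style uncertainty set around a single estimate. The function names, the interval parameterization, and the scalar one-step setting are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def ensemble_min_target(next_q_values, reward, gamma=0.99):
    """SAC-N-style conservative one-step target: bootstrap from the
    minimum over N Q-estimates for the next state-action pair."""
    return reward + gamma * np.min(next_q_values)

def uncertainty_set_target(next_q_mean, radius, reward, gamma=0.99):
    """Uncertainty-set analogue (hypothetical interval form): bootstrap
    from the worst case of [next_q_mean - radius, next_q_mean + radius],
    so a single estimate plus a set radius replaces N ensemble members."""
    return reward + gamma * (next_q_mean - radius)

# Three ensemble members vs. one estimate with an explicit radius.
t_ens = ensemble_min_target(np.array([1.0, 2.0, 0.5]), reward=1.0)  # uses min = 0.5
t_set = uncertainty_set_target(1.0, radius=0.3, reward=1.0)         # uses 1.0 - 0.3
```

Both targets are pessimistic, but the set-based one makes the amount of pessimism an explicit, learnable quantity rather than a side effect of ensemble size.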
Problem

Research questions and friction points this paper is trying to address.

epistemic uncertainty
offline reinforcement learning
value estimation
generalization
behavior policy
Innovation

Methods, ideas, or system contributions that make the work stand out.

epistemic uncertainty
uncertainty sets
offline reinforcement learning
robust Bellman objective
Epinet
Abhilash Reddy Chenreddy
GERAD, Department of Decision Sciences, HEC Montréal, Montréal, Québec H3T 2A7, Canada
Erick Delage
Professor, Department of Decision Sciences, HEC Montréal
Decision making under uncertainty, robust optimization, stochastic programming, applied statistics