🤖 AI Summary
This work addresses epistemic uncertainty in offline reinforcement learning arising from limited data coverage or behavior-policy bias by proposing a unified and generalizable framework. Departing from conventional large-scale ensemble architectures, it integrates a compact uncertainty set over Q-values with the Epinet architecture, optimizing cumulative reward under a robust Bellman objective while explicitly disentangling epistemic from aleatoric uncertainty. By incorporating a risk-sensitive behavior policy and introducing a new benchmark evaluation protocol, the proposed framework consistently outperforms existing ensemble-based methods across both tabular and continuous-state tasks, demonstrating superior robustness and generalization.
📝 Abstract
Offline reinforcement learning learns policies from fixed datasets without further environment interaction. A key challenge in this setting is epistemic uncertainty arising from limited or biased data coverage, particularly when the behavior policy systematically avoids certain actions; this can lead to inaccurate value estimates and unreliable generalization. Ensemble-based methods such as SAC-N mitigate this by conservatively estimating Q-values using the ensemble minimum, but they require large ensembles and often conflate epistemic with aleatoric uncertainty. To address these limitations, we propose a unified and generalizable framework that replaces discrete ensembles with compact uncertainty sets over Q-values. We further introduce an Epinet-based model that directly shapes these uncertainty sets to optimize cumulative reward under the robust Bellman objective without relying on ensembles. We also introduce a benchmark for evaluating offline RL algorithms under risk-sensitive behavior policies, and demonstrate that our method achieves improved robustness and generalization over ensemble-based baselines across both tabular and continuous-state domains.
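To make the contrast concrete, the sketch below illustrates the two pessimism mechanisms the abstract describes: an ensemble minimum over N separate Q-heads (as in SAC-N) versus a single base network with a small index-conditioned head, where sampling the index traces out a compact uncertainty set. This is a hedged toy illustration, not the paper's implementation: the linear Q-functions, the head dimensions, and the names `ensemble_min_q`, `epinet_q`, and `uncertainty_set_min_q` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Ensemble-style pessimism (SAC-N flavor): N independent Q-heads,
# each a toy linear function of (state, action); the conservative
# estimate is their elementwise minimum.
N = 10
q_heads = [
    lambda s, a, w=rng.normal(size=3): w[0] * s + w[1] * a + w[2]
    for _ in range(N)
]

def ensemble_min_q(s, a):
    # Pessimistic value: minimum over all N heads (memory grows with N).
    return min(q(s, a) for q in q_heads)

# --- Epinet-style alternative (hypothetical sketch): one base predictor
# plus a small epistemic head conditioned on an index z. Sampling z
# yields a family of Q-values -- a compact uncertainty set -- without
# storing N full networks.
base_w = rng.normal(size=3)          # shared base parameters
epi_w = rng.normal(size=(3, 8))      # small index-conditioned head

def epinet_q(s, a, z):
    feat = np.array([s, a, 1.0])
    # Base value plus an index-dependent perturbation.
    return feat @ base_w + feat @ epi_w @ z

def uncertainty_set_min_q(s, a, n_samples=32):
    # Pessimism over the sampled uncertainty set instead of an ensemble.
    zs = rng.normal(size=(n_samples, 8))
    return min(epinet_q(s, a, z) for z in zs)
```

The key design difference mirrored here is that the ensemble's epistemic spread is fixed by its N trained heads, while the index-network form exposes uncertainty through a cheap sampling dimension that a robust Bellman objective could shape directly.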