🤖 AI Summary
This paper addresses the lack of model-free regret analysis for the Decision-Estimation Coefficient (DEC) in adversarial Markov decision processes, particularly hybrid MDPs with stochastic transitions and adversarial rewards. It proposes Dig-DEC, the first model-free, non-optimistic, purely information-gain-driven DEC method. Its key contributions are: (1) the first model-free regret upper bound for hybrid MDPs; (2) elimination of optimistic assumptions, enabling compatibility with bandit feedback and fully adversarial environments; and (3) integration of online function estimation with two-timescale updates, achieving optimal $\tilde{O}(\sqrt{T})$ regret under Bellman completeness, substantially improving over prior on-policy ($T^{2/3}$ vs. $T^{3/4}$) and off-policy ($T^{7/9}$ vs. $T^{5/6}$) bounds.
📝 Abstract
We study decision making with structured observations (DMSO). Previous work (Foster et al., 2021b, 2023a) characterized the complexity of DMSO via the decision-estimation coefficient (DEC), but left a gap between the regret upper and lower bounds that scales with the size of the model class. To tighten this gap, Foster et al. (2023b) introduced the optimistic DEC, achieving a bound that scales only with the size of the value-function class. However, their optimism-based exploration is only known to handle the stochastic setting, and it remains unclear whether it extends to the adversarial setting.
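For context, the DEC of Foster et al. (2021b) balances regret against information gain measured in squared Hellinger distance to a reference model. A standard form from that line of work is sketched below (notation follows the DEC literature; this is not necessarily the exact variant used in this paper):

```latex
\mathrm{dec}_{\gamma}(\mathcal{M}, \widehat{M})
  = \inf_{p \in \Delta(\Pi)} \, \sup_{M \in \mathcal{M}} \,
    \mathbb{E}_{\pi \sim p}\!\left[
      f^{M}(\pi_{M}) - f^{M}(\pi)
      - \gamma \, D_{\mathrm{H}}^{2}\!\big(M(\pi), \widehat{M}(\pi)\big)
    \right]
```

Here $f^{M}(\pi)$ is the expected reward of decision $\pi$ under model $M$, $\pi_{M}$ is the optimal decision for $M$, $\widehat{M}$ is the learner's current estimate, and $\gamma > 0$ trades off instantaneous regret against the information gained by distinguishing $M$ from $\widehat{M}$.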
We introduce Dig-DEC, a model-free DEC that removes optimism and drives exploration purely by information gain. Dig-DEC is always no larger than optimistic DEC and can be much smaller in special cases. Importantly, the removal of optimism allows it to handle adversarial environments without explicit reward estimators. By applying Dig-DEC to hybrid MDPs with stochastic transitions and adversarial rewards, we obtain the first model-free regret bounds for hybrid MDPs with bandit feedback under several general transition structures, resolving the main open problem left by Liu et al. (2025).
We also improve the online function-estimation procedure in model-free learning: For average estimation error minimization, we refine the estimator in Foster et al. (2023b) to achieve sharper concentration, improving their regret bounds from $T^{3/4}$ to $T^{2/3}$ (on-policy) and from $T^{5/6}$ to $T^{7/9}$ (off-policy). For squared error minimization in Bellman-complete MDPs, we redesign their two-timescale procedure, improving the regret bound from $T^{2/3}$ to $\sqrt{T}$. This is the first time a DEC-based method achieves performance matching that of optimism-based approaches (Jin et al., 2021; Xie et al., 2023) in Bellman-complete MDPs.