🤖 AI Summary
Traditional Markovian reinforcement learning fails on history-dependent decision tasks, where success depends on the full system trajectory rather than on individual states, while existing non-Markovian reward decision process (NMRDP) methods lack sample-efficiency and near-optimality guarantees.
Method: We propose the first model-based RL framework for NMRDPs with PAC (Probably Approximately Correct) guarantees. It decouples Markovian transition dynamics from non-Markovian rewards via a reward machine, enabling provably efficient learning. For discrete-action NMRDPs, we establish the first polynomial-sample-complexity guarantee for convergence to ε-optimal policies. Furthermore, we design Bucket-QR-MAX, which integrates SimHash-based state discretization to generalize over continuous states without manual binning or function approximation.
Results: Experiments across diverse temporal-dependency tasks demonstrate significantly improved sample efficiency, stable convergence to optimal policies, and consistent superiority over state-of-the-art model-based RL baselines.
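The reward-machine idea in the summary above can be illustrated with a minimal sketch. The task, labels, and transition table below are hypothetical examples, not the paper's implementation: a small automaton reads propositional labels observed along the trajectory and emits rewards, so the pair (environment state, machine state) becomes a Markovian state on which standard model-based learning applies.

```python
# Hypothetical reward machine for the non-Markovian task
# "first visit A, then visit B" (illustrative, not from the paper).
# delta maps (machine state, observed label) -> (next state, reward).
delta = {
    (0, "A"): (1, 0.0),   # A seen: remember it, no reward yet
    (1, "B"): (2, 1.0),   # B seen after A: task complete, reward 1
}

def step(u, label):
    """Advance the reward machine on one observed label; pairs not
    listed in delta self-loop with zero reward."""
    return delta.get((u, label), (u, 0.0))

# Run the machine over a trajectory's label sequence.
u, total = 0, 0.0
for lab in ["C", "A", "C", "B"]:
    u, r = step(u, lab)
    total += r
# The augmented pair (environment state, u) makes the reward Markovian,
# which is what lets transition learning be factorized from reward handling.
```

Here the history dependence ("A before B") is carried entirely by the machine state `u`, so a Markovian learner over the augmented state space recovers the temporally extended reward.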
📝 Abstract
Many practical decision-making problems involve tasks whose success depends on the entire system history, rather than on reaching a state with desired properties. Markovian Reinforcement Learning (RL) approaches are not suitable for such tasks, while RL with non-Markovian reward decision processes (NMRDPs) enables agents to tackle temporal-dependency tasks. However, this approach has long lacked formal guarantees on both (near-)optimality and sample efficiency. We contribute to solving both issues with QR-MAX, a novel model-based algorithm for discrete NMRDPs that factorizes Markovian transition learning from non-Markovian reward handling via reward machines. To the best of our knowledge, this is the first model-based RL algorithm for discrete-action NMRDPs that exploits this factorization to obtain PAC convergence to $\varepsilon$-optimal policies with polynomial sample complexity. We then extend QR-MAX to continuous state spaces with Bucket-QR-MAX, a SimHash-based discretiser that preserves the same factorized structure and achieves fast and stable learning without manual gridding or function approximation. We experimentally compare our method with state-of-the-art model-based RL approaches on environments of increasing complexity, showing a significant improvement in sample efficiency and increased robustness in finding optimal policies.
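The SimHash-based discretisation mentioned above can be sketched as follows. The projection matrix, bit count, and bucket encoding here are illustrative assumptions, not Bucket-QR-MAX's actual scheme: nearby continuous states tend to share the sign pattern of a few random linear projections, and that pattern serves as a discrete bucket key, replacing manual gridding.

```python
import numpy as np

def simhash_bucket(state, projection):
    """Map a continuous state to one of 2^k buckets via the sign
    pattern of k random linear projections (SimHash-style)."""
    bits = (projection @ state > 0).astype(int)
    # Pack the k sign bits into a single integer bucket key.
    return int("".join(map(str, bits)), 2)

rng = np.random.default_rng(0)
k, d = 8, 2                      # 8 hash bits over a 2-D state space
A = rng.standard_normal((k, d))  # fixed random projection matrix

b_near = simhash_bucket(np.array([0.50, 1.00]), A)
b_far = simhash_bucket(np.array([-3.0, 4.0]), A)
# Model statistics kept per bucket key then generalize across the
# continuous states that hash to the same bucket.
```

Because the projection is fixed once, the mapping is deterministic, so counts and transition estimates accumulated per bucket remain consistent across episodes.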