🤖 AI Summary
In partially observable Markov decision processes (POMDPs), the quality of history representations fundamentally limits performance on tasks with long-horizon dependencies. This paper proposes DRL², a framework that decouples history representation learning from policy optimization, and presents systematic empirical evidence that a single self-supervised auxiliary task, predicting future observations, is sufficient to learn highly generalizable history encodings. The experiments establish prediction accuracy as a reliable proxy for representation quality. Across multiple long-memory benchmarks, including T-Maze and Memory Maze, DRL² consistently improves policy performance across diverse neural architectures, with robust and reproducible gains. The key contribution is evidence that future prediction alone suffices to drive high-fidelity history modeling, supporting a simple yet effective paradigm for representation learning in POMDPs.
📝 Abstract
Learning good representations of historical contexts is one of the core challenges of reinforcement learning (RL) in partially observable environments. While self-predictive auxiliary tasks have been shown to improve performance in fully observed settings, their role in partial observability remains underexplored. In this empirical study, we examine the effectiveness of self-predictive representation learning via future prediction, i.e., predicting next-step observations as an auxiliary task for learning history representations, especially in environments with long-term dependencies. We test the hypothesis that future prediction alone can produce representations that enable strong RL performance. To evaluate this, we introduce $\texttt{DRL}^2$, an approach that explicitly decouples representation learning from reinforcement learning, and compare this approach to end-to-end training across multiple benchmarks requiring long-term memory. Our findings provide evidence that this hypothesis holds across different network architectures, reinforcing the idea that future prediction performance serves as a reliable indicator of representation quality and contributes to improved RL performance.
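To make the auxiliary task concrete, the following is a minimal sketch (not the paper's implementation) of what "future prediction as an auxiliary objective" looks like: a simple recurrent encoder folds the observation history into a latent state, a linear head predicts the next observation, and the mean squared prediction error is the self-supervised loss that would train the encoder independently of any policy objective. All names, shapes, and the choice of a vanilla RNN are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, hid_dim, T = 4, 8, 10  # illustrative sizes, not from the paper

# Encoder (simple RNN) and prediction-head parameters.
W_in = rng.normal(scale=0.1, size=(hid_dim, obs_dim))
W_rec = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
W_pred = rng.normal(scale=0.1, size=(obs_dim, hid_dim))

def encode(history):
    """Fold an observation history o_1..o_t into a latent state z_t."""
    z = np.zeros(hid_dim)
    for o in history:
        z = np.tanh(W_in @ o + W_rec @ z)
    return z

def future_prediction_loss(observations):
    """MSE of predicting o_{t+1} from the history o_1..o_t, averaged over t."""
    losses = []
    for t in range(1, len(observations)):
        z = encode(observations[:t])
        o_hat = W_pred @ z  # predicted next observation
        losses.append(np.mean((o_hat - observations[t]) ** 2))
    return float(np.mean(losses))

obs = rng.normal(size=(T, obs_dim))  # a dummy trajectory
loss = future_prediction_loss(obs)   # non-negative scalar auxiliary loss
```

In a decoupled setup of the kind the abstract describes, only this loss would update the encoder; the RL algorithm would then consume the (possibly frozen) latent states `z_t` as its state input, rather than backpropagating the policy gradient into the encoder.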