🤖 AI Summary
This work addresses multi-step lookahead reinforcement learning, where an agent observes a sequence of future state transitions and rewards before committing to a decision. To overcome the limitations of fixed-horizon batching and model predictive control, as well as the NP-hardness of computing optimal lookahead policies, we propose adaptive batching policies (ABPs), which partition the lookahead information into state-dependent batches to improve decision-making. We establish, for the first time, the optimal Bellman equations for ABPs and design an optimistic regret-minimization algorithm that efficiently learns the optimal ABP in unknown environments. Theoretical analysis shows that the algorithm achieves a minimax-optimal regret bound up to a multiplicative factor of the lookahead horizon, which is typically a small constant.
📝 Abstract
We study tabular reinforcement learning problems with multiple steps of lookahead information. Before acting, the learner observes $\ell$ steps of future transition and reward realizations: the exact state the agent would reach and the rewards it would collect under any possible course of action. While such information has been shown to drastically boost the achievable value, finding the optimal policy is NP-hard, so it is common to apply one of two tractable heuristics: processing the lookahead in chunks of predefined sizes ('fixed batching policies') or model predictive control. We first illustrate the shortcomings of these two approaches and propose utilizing the lookahead in adaptive (state-dependent) batches; we refer to such policies as adaptive batching policies (ABPs). We derive the optimal Bellman equations for these strategies and design an optimistic regret-minimizing algorithm that learns the optimal ABP while interacting with unknown environments. Our regret bounds are order-optimal up to a potential factor of the lookahead horizon $\ell$, which can usually be considered a small constant.
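To make the batching idea concrete, here is a minimal Python sketch of the distinction between fixed and adaptive batching. Everything in it (the two-state MDP, the state names, the reward values) is invented for illustration and is not from the paper; a toy deterministic environment stands in for the observed lookahead realizations. A batching policy commits, at each decision point, to the best action sequence of some length $b \le \ell$ and executes it in full before replanning; fixed batching uses a constant $b$, while an ABP chooses $b$ as a function of the current state.

```python
import itertools

# Toy deterministic 2-state MDP (illustrative only, not from the paper):
# in state 'A', 'stay' pays 1 immediately, while 'go' pays 0 but moves to
# 'B', where every action pays 5 and returns to 'A'.
TRANS = {('A', 'stay'): 'A', ('A', 'go'): 'B',
         ('B', 'stay'): 'A', ('B', 'go'): 'A'}
REW = {('A', 'stay'): 1.0, ('A', 'go'): 0.0,
       ('B', 'stay'): 5.0, ('B', 'go'): 5.0}
ACTIONS = ('stay', 'go')

def seq_return(s, seq):
    """Total reward of executing an action sequence from state s.
    With deterministic dynamics, exact simulation plays the role of the
    lookahead information the learner would observe."""
    total = 0.0
    for a in seq:
        total += REW[(s, a)]
        s = TRANS[(s, a)]
    return total

def run_batching(batch_size_of, s0, T, ell=2):
    """Roll out T steps. At each decision point, pick a batch size from the
    current state (constant for fixed batching, state-dependent for an
    ABP), commit to the best action sequence of that length, execute it."""
    s, total, t = s0, 0.0, 0
    while t < T:
        b = min(batch_size_of(s), ell, T - t)
        seq = max(itertools.product(ACTIONS, repeat=b),
                  key=lambda q: seq_return(s, q))
        for a in seq:
            total += REW[(s, a)]
            s = TRANS[(s, a)]
            t += 1
    return total

# Batches of size 1 are myopic ('stay' forever); batches of size 2 see the
# delayed payoff of 'go'; an ABP may choose a different size per state.
myopic = run_batching(lambda s: 1, 'A', T=4)              # 4.0
full = run_batching(lambda s: 2, 'A', T=4)                # 10.0
abp = run_batching(lambda s: 2 if s == 'A' else 1, 'A', T=4)
```

This toy only contrasts batch sizes under exhaustive search; the paper's contribution is the Bellman characterization of the optimal state-dependent batching rule and an algorithm that learns it with low regret when the environment is unknown.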