🤖 AI Summary
Reinforcement learning (RL) with multi-step future-state prediction faces two key challenges: conventional single-step MDP formulations fail to leverage high-dimensional multi-step predictions effectively, and existing theoretical frameworks lack a rigorous treatment of practical constraints such as prediction errors and incomplete action coverage. Method: We propose a Bayesian value-function modeling framework coupled with a Bellman–Jensen Gap analysis, enabling the first formal characterization of policy learnability under erroneous multi-step predictions. Building on this, we design BOLA, a two-stage algorithm integrating model-based prediction, Bayesian inference, and online adaptation. Contribution/Results: Evaluated on synthetic MDPs and a real-world wind-power–battery-storage coordinated control task, BOLA achieves significant improvements in sample efficiency and decision-making performance, empirically validating both the theoretical soundness and the engineering applicability of our approach.
📝 Abstract
Traditional reinforcement learning (RL) assumes that agents make decisions based on Markov decision processes (MDPs) with one-step transition models. In many real-world applications, such as energy management and stock investment, agents can access multi-step predictions of future states, which provide additional advantages for decision making. However, multi-step predictions are inherently high-dimensional: naively embedding these predictions into an MDP leads to an exponential blow-up of the state space and the curse of dimensionality. Moreover, existing RL theory provides few tools to analyze prediction-augmented MDPs, as it typically operates on one-step transition kernels and cannot accommodate multi-step predictions with errors or partial action coverage. We address these challenges with three key innovations. First, we propose the *Bayesian value function* to characterize the optimal prediction-aware policy tractably. Second, we develop a novel *Bellman–Jensen Gap* analysis of the Bayesian value function, which enables characterizing the value of imperfect predictions. Third, we introduce BOLA (Bayesian Offline Learning with Online Adaptation), a two-stage model-based RL algorithm that separates offline Bayesian value learning from lightweight online adaptation to real-time predictions. We prove that BOLA remains sample-efficient even under imperfect predictions. We validate our theory and algorithm on synthetic MDPs and a real-world wind energy storage control problem.
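To make the "value of an imperfect prediction" concrete, here is a minimal toy sketch (our own illustration, not the paper's algorithm): an agent receives a noisy prediction of the next state, forms a posterior over the true next state, and acts greedily on that posterior. The accuracy level, prior, and reward table are all invented for illustration.

```python
import numpy as np

# Toy setting: two possible next states {0, 1}, a noisy predictor that
# is correct with probability `accuracy`, and a reward table indexed by
# (action, true next state). The "Bayesian value" of a prediction is the
# best expected reward under the posterior it induces.

accuracy = 0.8                        # P(pred == true next state), assumed
prior = np.array([0.5, 0.5])          # prior over the next state
reward = np.array([[1.0, 0.0],        # reward[action, next_state]
                   [0.0, 1.0]])

def posterior(pred):
    """Posterior over the true next state given the noisy prediction."""
    like = np.where(np.arange(2) == pred, accuracy, 1 - accuracy)
    post = like * prior
    return post / post.sum()

def bayes_value(pred):
    """Best expected reward when acting greedily on the posterior."""
    return max(reward @ posterior(pred))

print(bayes_value(0))       # 0.8 -- acting on the prediction
print(max(reward @ prior))  # 0.5 -- acting on the prior alone
```

The gap between the two printed values (0.8 vs. 0.5) is the benefit the imperfect prediction contributes; as `accuracy` drops to 0.5 the prediction becomes uninformative and the gap vanishes, which is the kind of trade-off the paper's Bellman–Jensen Gap analysis quantifies formally.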