🤖 AI Summary
To address uncertainty-aware prediction and decision-making in partially observable stochastic systems, this paper proposes Diffusion-Informed MPC, a framework that integrates diffusion-based forecasting models into model predictive control (MPC) to propagate uncertainty end to end. The method supports probabilistic time-series forecasting under non-Gaussian, arbitrarily structured noise, combining diffusion-based trajectory sampling with receding-horizon optimization over the sampled futures. Evaluated on a battery energy storage arbitrage task in the New York State day-ahead electricity market, it achieves 17.3% and 22.8% higher revenue than MPC with classical statistical forecasters and model-free reinforcement learning baselines, respectively, while improving robustness and risk management. Its core contribution is relaxing traditional MPC's reliance on deterministic or Gaussian forecasts, establishing a scalable, probabilistic closed-loop decision-making paradigm.
📝 Abstract
We propose Diffusion-Informed Model Predictive Control (D-I MPC), a generic framework for uncertainty-aware prediction and decision-making in partially observable stochastic systems that integrates diffusion-based time series forecasting models into Model Predictive Control algorithms. In our approach, a diffusion-based time series forecasting model probabilistically estimates the evolution of the system's stochastic components. These forecasts are then incorporated into MPC algorithms to estimate future trajectories and optimize action selection under future uncertainty. We evaluate the framework on the task of energy arbitrage, where a Battery Energy Storage System participates in the day-ahead electricity market of New York State. Experimental results indicate that our model-based approach with a diffusion-based forecaster significantly outperforms both implementations with classical forecasting methods and model-free reinforcement learning baselines.
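The pipeline the abstract describes can be sketched as a sample-based receding-horizon loop: draw many price trajectories from a probabilistic forecaster, score candidate action plans by expected revenue across those samples, and execute only the first action before replanning. The sketch below is illustrative only; `sample_price_trajectories` is a hypothetical stand-in for the paper's diffusion forecaster (here it just draws noisy paths), and the battery model, action set, and horizon are simplified assumptions, not the paper's implementation.

```python
import itertools
import random

def sample_price_trajectories(horizon, n_samples, base=50.0, seed=0):
    """Hypothetical stand-in for a diffusion forecaster: in D-I MPC these
    samples would come from the trained diffusion model's reverse process."""
    rng = random.Random(seed)
    return [[base + 10.0 * rng.gauss(0, 1) for _ in range(horizon)]
            for _ in range(n_samples)]

def revenue(actions, prices, soc0=0.5, capacity=1.0, eff=0.9):
    """Revenue of a charge(-1)/idle(0)/discharge(+1) plan on one price path,
    with a toy state-of-charge model and round-trip efficiency losses."""
    soc, total = soc0, 0.0
    for a, p in zip(actions, prices):
        if a > 0 and soc > 0:            # discharge: sell energy at price p
            e = min(soc, 0.25)
            soc -= e
            total += p * e * eff
        elif a < 0 and soc < capacity:   # charge: buy energy at price p
            e = min(capacity - soc, 0.25)
            soc += e
            total -= p * e / eff
    return total

def di_mpc_step(horizon=4, n_samples=32):
    """One receding-horizon step: enumerate action plans, score each by its
    mean revenue over the sampled price trajectories, return the first
    action of the best plan (the rest is discarded and replanned later)."""
    paths = sample_price_trajectories(horizon, n_samples)
    best_plan, best_val = None, float("-inf")
    for plan in itertools.product([-1, 0, 1], repeat=horizon):
        val = sum(revenue(plan, p) for p in paths) / n_samples
        if val > best_val:
            best_plan, best_val = plan, val
    return best_plan[0]
```

A real implementation would replace the exhaustive plan enumeration with a scalable optimizer and could impose risk constraints on the sampled-revenue distribution rather than optimizing its mean alone.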