🤖 AI Summary
This work proposes an inference-time adaptation framework for offline reinforcement learning that integrates a differentiable world model with model predictive control (MPC), enabling end-to-end gradient-based optimization of pretrained policy parameters through imagined trajectories. Unlike conventional offline RL approaches that deploy fixed policies incapable of leveraging environmental dynamics during inference, the proposed method overcomes the static deployment limitation by adaptively refining policies at test time. Evaluated on the D4RL benchmark—including MuJoCo locomotion and AntMaze tasks—the approach significantly outperforms strong existing baselines, demonstrating both the effectiveness and practicality of online policy optimization during inference.
📝 Abstract
Offline Reinforcement Learning (RL) aims to learn optimal policies from fixed offline datasets, without further interaction with the environment. Such methods train an offline policy (or value function) and apply it at inference time without further refinement. We introduce an inference-time adaptation framework inspired by model predictive control (MPC) that utilizes a pretrained policy along with a learned world model of state transitions and rewards. While existing world-model and diffusion-planning methods use learned dynamics to generate imagined trajectories during training, or to sample candidate plans at inference time, they do not use inference-time information to optimize the policy parameters on the fly. In contrast, our design is a Differentiable World Model (DWM) pipeline that enables end-to-end gradient computation through imagined rollouts for policy optimization at inference time based on MPC. We evaluate our algorithm on D4RL continuous-control benchmarks (MuJoCo locomotion tasks and AntMaze), and show that exploiting inference-time information to optimize the policy parameters yields consistent gains over strong offline RL baselines.
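To make the core idea concrete, here is a minimal toy sketch of gradient-based policy adaptation through a differentiable world model. Everything in it is a simplifying assumption, not the paper's implementation: a scalar linear world model s' = A·s + B·u, a quadratic imagined reward, a one-parameter linear policy u = θ·s, and hand-rolled forward-mode differentiation in place of an autodiff framework. The function names (`imagined_return_and_grad`, `mpc_adapt`) are hypothetical.

```python
def imagined_return_and_grad(theta, s0, A=0.9, B=0.5, horizon=5):
    """Roll out the policy u = theta*s through the world model s' = A*s + B*u
    and return the imagined return J plus dJ/dtheta (forward-mode chain rule)."""
    c = A + B * theta          # closed-loop gain under the linear policy
    J, dJ = 0.0, 0.0
    s, ds = s0, 0.0            # ds tracks d(s_t)/d(theta) through the rollout
    for _ in range(horizon):
        ds = c * ds + B * s    # differentiate s_{t+1} = c * s_t w.r.t. theta
        s = c * s              # imagined next state
        J += -(s ** 2)         # imagined reward: drive the state toward zero
        dJ += -2.0 * s * ds    # accumulate gradient of the imagined return
    return J, dJ

def mpc_adapt(theta, s0, lr=0.1, steps=200):
    """Inference-time adaptation: gradient ascent on the imagined return."""
    for _ in range(steps):
        _, g = imagined_return_and_grad(theta, s0)
        theta += lr * g
    return theta
```

In this toy setting, adaptation drives the closed-loop gain A + B·θ toward zero, improving the imagined return; the paper's pipeline replaces the scalar linear model with a learned neural world model and differentiates through multi-step rollouts with standard autodiff.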