🤖 AI Summary
Diffusion-based planners in offline reinforcement learning are prone to selecting dynamically inconsistent trajectories under value guidance, leading to fragile execution. To address this, this work proposes SAGE, a method that applies a self-supervised latent consistency criterion at inference time, trained purely on offline data without environment interaction or policy retraining. SAGE re-ranks candidate trajectories by pairing a Joint Embedding Predictive Architecture (JEPA) encoder with an action-conditioned latent predictor: each candidate receives an energy score based on its latent prediction error, which is fused with value estimates to select actions that are both high-value and dynamically consistent. Evaluated across locomotion, navigation, and manipulation tasks, SAGE significantly enhances the robustness and performance of diffusion planners.
📝 Abstract
Diffusion planners are a strong approach for offline reinforcement learning, but they can fail when value-guided selection favours trajectories that score well yet are locally inconsistent with the environment dynamics, resulting in brittle execution. We propose Self-supervised Action Gating with Energies (SAGE), an inference-time re-ranking method that penalises dynamically inconsistent plans using a latent consistency signal. SAGE trains a Joint-Embedding Predictive Architecture (JEPA) encoder on offline state sequences and an action-conditioned latent predictor for short-horizon transitions. At test time, SAGE assigns each sampled candidate an energy given by its latent prediction error and combines this feasibility score with value estimates to select actions. SAGE integrates into any existing diffusion-planning pipeline that samples trajectories and selects actions via value scoring; it requires no environment rollouts and no policy retraining. Across locomotion, navigation, and manipulation benchmarks, SAGE improves the performance and robustness of diffusion planners.
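The test-time re-ranking step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `predictor` callable stands in for the trained action-conditioned latent predictor, and the linear fusion with weight `lam` is an assumed form of the energy/value combination.

```python
import numpy as np

def latent_energy(predictor, z0, actions, z_targets):
    """Energy of one candidate plan: cumulative latent prediction error.

    Rolls the (assumed) action-conditioned predictor forward in latent
    space over a short horizon and accumulates squared error against the
    JEPA encodings of the candidate's own states.
    """
    z, err = z0, 0.0
    for a, z_next in zip(actions, z_targets):
        z = predictor(z, a)                     # predicted next latent
        err += float(np.sum((z - z_next) ** 2))  # consistency penalty
    return err

def sage_select(values, energies, lam=1.0):
    """Fuse value estimates with consistency energies and pick a plan.

    The linear fusion and trade-off weight `lam` are illustrative
    assumptions; higher energy (less dynamically consistent) lowers
    a candidate's score.
    """
    scores = np.asarray(values) - lam * np.asarray(energies)
    return int(np.argmax(scores))

# A high-value but inconsistent plan loses to a consistent runner-up.
values = [1.0, 0.95, 0.4]
energies = [8.0, 0.1, 0.2]
best = sage_select(values, energies, lam=1.0)  # selects index 1
```

Because the gating operates only on already-sampled candidates and their scores, it slots in after any diffusion sampler without touching the planner or the value function.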