Improving Diffusion Planners by Self-Supervised Action Gating with Energies

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based planners in offline reinforcement learning are prone to selecting dynamically inconsistent trajectories under value guidance, leading to fragile execution. To address this issue, this work proposes SAGE, a method that trains self-supervised models purely on offline data and uses them at inference time as a latent consistency criterion, requiring no environment interaction or policy retraining. SAGE re-ranks candidate trajectories by pairing a Joint-Embedding Predictive Architecture (JEPA) encoder with an action-conditioned latent predictor: it assigns each candidate an energy score derived from its latent prediction error and fuses this score with value estimates to select actions that are both high-value and dynamically consistent. Evaluated across locomotion, navigation, and manipulation tasks, SAGE significantly enhances the robustness and performance of diffusion planners.

📝 Abstract
Diffusion planners are a strong approach for offline reinforcement learning, but they can fail when value-guided selection favours trajectories that score well yet are locally inconsistent with the environment dynamics, resulting in brittle execution. We propose Self-supervised Action Gating with Energies (SAGE), an inference-time re-ranking method that penalises dynamically inconsistent plans using a latent consistency signal. SAGE trains a Joint-Embedding Predictive Architecture (JEPA) encoder on offline state sequences and an action-conditioned latent predictor for short-horizon transitions. At test time, SAGE assigns each sampled candidate an energy given by its latent prediction error and combines this feasibility score with value estimates to select actions. SAGE integrates into any existing diffusion planning pipeline that samples trajectories and selects actions via value scoring; it requires no environment rollouts and no policy re-training. Across locomotion, navigation, and manipulation benchmarks, SAGE improves the performance and robustness of diffusion planners.
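The abstract gives no pseudocode, so the following is a minimal sketch of the re-ranking step it describes, under stated assumptions: `encoder` and `predictor` stand in for the trained JEPA encoder and action-conditioned latent predictor (their signatures are hypothetical), the energy is taken to be the summed squared latent prediction error over the candidate's horizon, and value and energy are fused by a simple weighted difference with trade-off coefficient `beta` (the paper's actual fusion rule may differ).

```python
import numpy as np


def latent_energy(encoder, predictor, states, actions):
    """Energy of one candidate plan: cumulative latent prediction error.

    encoder: state -> latent embedding (assumed JEPA-style encoder).
    predictor: (latent, action) -> predicted next latent.
    states: sequence of len(actions) + 1 states from the candidate trajectory.
    """
    z = encoder(states[0])
    err = 0.0
    for t, a in enumerate(actions):
        z = predictor(z, a)                       # roll latent forward with the action
        target = encoder(states[t + 1])           # embedding of the planned next state
        err += float(np.sum((np.asarray(z) - np.asarray(target)) ** 2))
    return err


def sage_rerank(candidates, values, energies, beta=1.0):
    """Select the candidate trading off value against consistency energy.

    High value is good; high energy (dynamic inconsistency) is penalised.
    """
    scores = np.asarray(values, dtype=float) - beta * np.asarray(energies, dtype=float)
    return int(np.argmax(scores))


# Toy usage with identity encoder and additive latent dynamics (assumptions):
encoder = lambda s: np.asarray(s, dtype=float)
predictor = lambda z, a: z + a

consistent = {"states": [0.0, 1.0, 2.0], "actions": [1.0, 1.0]}   # matches dynamics
inconsistent = {"states": [0.0, 1.0, 5.0], "actions": [1.0, 1.0]} # final state breaks dynamics

energies = [latent_energy(encoder, predictor, c["states"], c["actions"])
            for c in (consistent, inconsistent)]
values = [1.0, 2.0]  # the inconsistent plan looks better under value alone

best = sage_rerank([consistent, inconsistent], values, energies, beta=1.0)
```

Here the inconsistent plan has the higher value estimate, but its latent prediction error dominates once the energy penalty is applied, so the re-ranker picks the dynamically consistent plan instead.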
Problem

Research questions and friction points this paper is trying to address.

diffusion planners
offline reinforcement learning
dynamic inconsistency
trajectory selection
execution brittleness
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion planning
self-supervised learning
latent consistency
offline reinforcement learning
action gating