Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference

📅 2024-02-07
🏛️ Neural Information Processing Systems
📈 Citations: 7
Influential: 0
🤖 AI Summary
Long-horizon decision-making suffers from poor temporal consistency in the absence of step-wise rewards, which hinders reliable credit assignment and trajectory composition. Method: an offline planning framework based on latent-variable bridging, which casts planning as conditional probabilistic inference in a learned latent space. Using trajectory–return pairs repurposed from offline RL, a Transformer-based generator is trained in which a structured latent variable explicitly bridges the policy and the final return, enabling end-to-end planning without per-step reward supervision. The approach integrates latent-variable modeling, maximum-likelihood training, posterior sampling, and test-time inverse inference. Results: on Gym-Mujoco and Franka Kitchen benchmarks, the method outperforms baselines trained on the same suboptimal trajectories, improving credit assignment, subtrajectory stitching, and cross-task generalization.

📝 Abstract
In tasks aiming for long-term returns, planning becomes essential. We study generative modeling for planning with datasets repurposed from offline reinforcement learning. Specifically, we identify temporal consistency in the absence of step-wise rewards as one key technical challenge. We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent variable to connect a Transformer-based trajectory generator and the final return. LPT can be learned with maximum likelihood estimation on trajectory-return pairs. In learning, posterior sampling of the latent variable naturally integrates sub-trajectories to form a consistent abstraction despite the finite context. At test time, the latent variable is inferred from an expected return before policy execution, realizing the idea of planning as inference. Our experiments demonstrate that LPT can discover improved decisions from sub-optimal trajectories, achieving competitive performance across several benchmarks, including Gym-Mujoco, Franka Kitchen, Maze2D, and Connect Four. It exhibits capabilities in nuanced credit assignments, trajectory stitching, and adaptation to environmental contingencies. These results validate that latent variable inference can be a strong alternative to step-wise reward prompting.
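The test-time step the abstract describes — inferring the latent variable from an expected return before execution — can be sketched in miniature. This is an illustrative toy, not the paper's implementation: a hypothetical linear return head `w` stands in for the Transformer-based model, the latent has a standard Gaussian prior, and inference is plain gradient ascent on the log-posterior.

```python
import numpy as np

def infer_latent(target_return, w, sigma=0.5, steps=200, lr=0.05):
    """Gradient ascent on log p(z | y) ∝ log N(y; w·z, σ²) + log N(z; 0, I).

    Toy stand-in for planning-as-inference: infer a latent z whose
    predicted return matches a desired value, before acting.
    """
    rng = np.random.default_rng(0)
    z = rng.standard_normal(w.shape)      # initialize from the prior
    for _ in range(steps):
        err = target_return - w @ z       # residual of the return model
        grad = (err / sigma**2) * w - z   # likelihood grad + prior grad
        z = z + lr * grad
    return z

# Toy usage: 4-d latent, hypothetical linear return head w.
w = np.array([1.0, -0.5, 0.25, 0.0])
z_star = infer_latent(target_return=2.0, w=w)
print(round(float(w @ z_star), 3))  # → 1.68, near the target but shrunk by the prior
```

The Gaussian prior pulls the inferred latent toward typical behavior, so the achieved return is shrunk below the requested one; in LPT this same tension keeps conditioned plans within the support of the offline data.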
Problem

Research questions and friction points this paper is trying to address.

Planning without step-wise rewards in long-term tasks
Temporal consistency in offline reinforcement learning datasets
Abstracting trajectories for improved decision-making from sub-optimal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent variable connects Transformer generator and return
Maximum likelihood estimation on trajectory-return pairs
Posterior sampling integrates sub-trajectories for consistency
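The learning recipe above — posterior sampling of the latent followed by a maximum-likelihood parameter update — can be sketched with a toy model. Assumptions are labeled: a linear decoder `w` replaces the Transformer generator, returns are the only observation, and short-run Langevin dynamics approximates a posterior sample; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def langevin_posterior(y, w, sigma=0.5, steps=60, step=0.02):
    """Short-run Langevin sampling of z ~ p(z | y) for a toy model
    y ≈ w·z + noise, z ~ N(0, I). Stand-in for LPT's posterior sampling."""
    z = rng.standard_normal(w.shape)
    for _ in range(steps):
        grad = (y - w @ z) / sigma**2 * w - z   # ∇ log p(y|z) + ∇ log p(z)
        z = z + step * grad + np.sqrt(2 * step) * rng.standard_normal(w.shape)
    return z

def mle_step(w, returns, lr=0.1, sigma=0.5):
    """One maximum-likelihood update of the toy decoder: draw a posterior
    sample of z per datum, then ascend the gradient of log p(y | z)."""
    grad = np.zeros_like(w)
    for y in returns:
        z = langevin_posterior(y, w, sigma)
        grad += (y - w @ z) / sigma**2 * z      # ∂ log N(y; w·z, σ²) / ∂w
    return w + lr * grad / len(returns)

# Toy usage: a tiny "dataset" of final returns.
w = np.zeros(4)
returns = [1.5, 2.0, 1.0, 1.8]
for _ in range(50):
    w = mle_step(w, returns)
```

The alternation mirrors the paper's training loop in spirit: posterior samples of the latent summarize each trajectory–return pair, and the decoder is updated to make those samples likely, with no step-wise reward ever used.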