🤖 AI Summary
This work addresses the inefficiency of large language models (LLMs) in planning tasks, which stems from autoregressive generation and repeated forward passes that hinder multi-step lookahead. The authors propose EmbedPlan, a novel approach that explicitly models state transitions within a frozen language embedding space. By encoding natural language descriptions of states and actions, EmbedPlan predicts the embedding of the next state and retrieves the resulting state via nearest-neighbor search, eliminating the need for fine-tuning. The method employs a lightweight transition model and is evaluated under diverse generalization protocols, including interpolation, extrapolation, and cross-domain transfer. While it achieves near-perfect performance on in-domain interpolation across nine classical planning domains, its effectiveness drops significantly on out-of-domain or unseen problems, highlighting that current approaches excel at modeling intra-domain dynamics but still struggle with cross-domain generalization.
📝 Abstract
Planning with LLMs is bottlenecked by token-by-token generation and repeated full forward passes, making multi-step lookahead and rollout-based search expensive in latency and compute. We propose EmbedPlan, which replaces autoregressive next-state generation with a lightweight transition model operating in a frozen language embedding space. EmbedPlan encodes natural language state and action descriptions into vectors, predicts the next-state embedding, and retrieves the next state by nearest-neighbor similarity, enabling fast planning computation without fine-tuning the encoder. We evaluate next-state prediction across nine classical planning domains using six evaluation protocols of increasing difficulty: interpolation, plan-variant, extrapolation, multi-domain, cross-domain, and leave-one-out. Results show near-perfect interpolation performance but a sharp degradation when generalization requires transfer to unseen problems or unseen domains; plan-variant evaluation indicates generalization to alternative plans rather than memorizing seen trajectories. Overall, frozen embeddings support within-domain dynamics learning after observing a domain's transitions, while transfer across domain boundaries remains a bottleneck.
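The pipeline described in the abstract (encode state and action text with a frozen encoder, apply a lightweight transition model in embedding space, then retrieve the next state by nearest-neighbor similarity) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the hash-based `encode` stands in for a real frozen sentence encoder, the transition model is a single linear map fit by least squares, and the toy domain and state descriptions are invented for the example.

```python
import hashlib
import numpy as np

DIM = 64  # embedding dimensionality of the (simulated) frozen encoder

def encode(text: str) -> np.ndarray:
    """Placeholder for a frozen text encoder: a deterministic
    pseudo-random unit vector per string. A real system would use a
    pretrained sentence encoder; only the interface matters here."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    v = np.random.default_rng(seed).standard_normal(DIM)
    return v / np.linalg.norm(v)

# Observed transitions: (state description, action description, next state).
# The domain and wording are illustrative, not taken from the paper.
transitions = [
    ("robot at A; box at A",      "pick up box",   "robot at A; holding box"),
    ("robot at A; holding box",   "move to B",     "robot at B; holding box"),
    ("robot at B; holding box",   "put down box",  "robot at B; box at B"),
]

# Lightweight transition model: a linear map from the concatenated
# [state; action] embedding to the next-state embedding, fit by least squares.
X = np.stack([np.concatenate([encode(s), encode(a)]) for s, a, _ in transitions])
Y = np.stack([encode(ns) for _, _, ns in transitions])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_next_state(state: str, action: str, candidates: list[str]) -> str:
    """Predict the next-state embedding, then retrieve the nearest
    candidate state description by cosine similarity."""
    pred = np.concatenate([encode(state), encode(action)]) @ W
    pred /= np.linalg.norm(pred)
    sims = [float(pred @ encode(c)) for c in candidates]
    return candidates[int(np.argmax(sims))]

states = [ns for _, _, ns in transitions]
print(predict_next_state("robot at A; box at A", "pick up box", states))
# → "robot at A; holding box"
```

Because the encoder stays frozen, only the small map `W` is learned, which is what makes embedding-space rollouts cheap relative to autoregressive generation; retrieval over a candidate set also mirrors why in-domain transitions are easy while unseen domains (whose states never appear among the candidates or training pairs) are hard.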