AI Summary
This work addresses key limitations of existing vision-language model (VLM)-based reflective planning approaches in complex, long-horizon robotic tasks, namely inefficient state-value learning, evaluation of only a single trajectory, and high inference latency. The authors propose a test-time computation framework that decouples state evaluation from action generation, explicitly modeling the advantage of an action plan as its reduction in distance to the goal. By integrating multi-path beam search, the method makes long-term reward estimation more robust. Additionally, a lightweight confidence-based trigger enables early termination of the reflection process when direct predictions are deemed reliable. Evaluated on unseen multi-stage manipulation tasks, the approach achieves a 24.6% absolute improvement in success rate over the strongest baseline while reducing inference time by 56.5%.
Abstract
Solving complex, long-horizon robotic manipulation tasks requires a deep understanding of physical interactions, reasoning about their long-term consequences, and precise high-level planning. Vision-Language Models (VLMs) offer a general perceive-reason-act framework for this goal. However, previous approaches that use reflective planning to guide VLMs in correcting actions encounter significant limitations. These methods rely on inefficient and often inaccurate implicit learning of state-values from noisy foresight predictions, evaluate only a single greedy future, and suffer from substantial inference latency. To address these limitations, we propose a novel test-time computation framework that decouples state evaluation from action generation, providing a more direct and fine-grained supervisory signal for robust decision-making. Our method explicitly models the advantage of an action plan, quantified by its reduction in distance to the goal, and uses a scalable critic to estimate this advantage. To address the stochastic nature of single-trajectory evaluation, we employ beam search to explore multiple future paths and aggregate them during decoding, modeling their expected long-term returns and leading to more robust action generation. Additionally, we introduce a lightweight, confidence-based trigger that allows for early exit when direct predictions are reliable, invoking reflection only when necessary. Extensive experiments on diverse, unseen multi-stage robotic manipulation tasks demonstrate a 24.6% improvement in success rate over state-of-the-art baselines, while reducing inference time by 56.5%.
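The control flow described above can be sketched in pseudocode-style Python. This is a minimal illustration, not the paper's implementation: `propose_plans`, `critic_distance`, and the numeric state are hypothetical stand-ins for the VLM policy, the learned critic, and the environment, and the confidence threshold is an assumed value.

```python
import heapq
import random

random.seed(0)

# Hypothetical stand-ins (NOT the paper's actual components):
# a policy that proposes candidate action plans, and a critic that
# estimates remaining distance to the goal for a given state.
def propose_plans(state, k):
    """Sample k candidate (next_state, plan) pairs from the policy (stubbed)."""
    return [(state + random.choice([-1, 0, 1]), f"plan{i}") for i in range(k)]

def critic_distance(state, goal):
    """Critic's estimated distance-to-goal (stubbed as |goal - state|)."""
    return abs(goal - state)

def advantage(state, next_state, goal):
    """Advantage of a plan = reduction in estimated distance to the goal."""
    return critic_distance(state, goal) - critic_distance(next_state, goal)

def beam_search(state, goal, beam_width=3, depth=2):
    """Explore multiple future paths, accumulating advantage along each,
    and keep the top beam_width paths at every step."""
    beams = [(0.0, state, [])]  # (cumulative advantage, state, plan trace)
    for _ in range(depth):
        candidates = []
        for score, s, trace in beams:
            for nxt, plan in propose_plans(s, beam_width):
                candidates.append((score + advantage(s, nxt, goal), nxt, trace + [plan]))
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return max(beams, key=lambda b: b[0])  # best aggregated long-term return

def act(state, goal, confidence, threshold=0.9):
    """Confidence-based trigger: exit early when the direct prediction is
    reliable; otherwise invoke multi-path reflection via beam search."""
    if confidence >= threshold:
        return "direct_plan"  # early exit, no reflection cost
    _, _, trace = beam_search(state, goal)
    return trace[0]  # execute the first step of the best evaluated future
```

The key design point mirrored here is the decoupling: the critic scores states independently of how plans are generated, and beam search averages out the noise of any single greedy rollout, while the confidence gate keeps the expensive reflection path off the common case.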