🤖 AI Summary
This work addresses the pervasive problem of action hallucination in generative vision-language-action models for robot policies: generated actions that violate physical constraints and lead to planning failures. It systematically uncovers the mechanistic origins of these hallucinations, identifying three structural mismatches between latent-variable generative models and feasible robot behavior: topological, precision, and horizon barriers. The study further elucidates an inherent trade-off between expressive capacity and reliability that these mismatches impose. By establishing a physics-aware constraint verification framework and a method for analyzing structural barriers, the paper offers a principled approach to improving policy trustworthiness without sacrificing expressiveness. This contribution provides both theoretical grounding and mechanistic insight for making generative robot policies more robust.
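The summary's "physics-aware constraint verification" is not spelled out here, so the following is a minimal sketch of one plausible reading, assuming a stochastic generative policy that emits joint-position trajectories and a verifier that rejection-samples until a trajectory satisfies joint-limit and velocity-limit constraints. All names (`JOINT_LOWER`, `toy_policy`, etc.) and the numeric limits are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical joint-space limits for a 7-DoF arm; real values would come
# from the robot's URDF or manufacturer specification.
JOINT_LOWER = np.full(7, -2.8)   # rad
JOINT_UPPER = np.full(7, 2.8)    # rad
MAX_JOINT_VEL = 1.5              # rad/s
DT = 0.05                        # control period (s)


def is_feasible(traj: np.ndarray) -> bool:
    """Check a sampled trajectory (T x 7 joint positions) against simple
    physical constraints: joint limits and per-step velocity limits."""
    if np.any(traj < JOINT_LOWER) or np.any(traj > JOINT_UPPER):
        return False
    vels = np.abs(np.diff(traj, axis=0)) / DT
    return bool(np.all(vels <= MAX_JOINT_VEL))


def sample_feasible_action(policy, obs, max_tries: int = 32):
    """Rejection-sample from a stochastic policy until a trajectory passes
    the constraint check; return None if every sample is infeasible
    (i.e., the policy hallucinates on this observation)."""
    for _ in range(max_tries):
        traj = policy(obs)  # policy: obs -> (T, 7) array of joint positions
        if is_feasible(traj):
            return traj
    return None


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Stand-in "policy": a random-walk trajectory generator, used only to
    # exercise the verifier; a VLA model would go here instead.
    def toy_policy(obs):
        steps = rng.normal(scale=0.02, size=(20, 7))
        return obs + np.cumsum(steps, axis=0)

    obs = np.zeros(7)
    traj = sample_feasible_action(toy_policy, obs)
    print("feasible trajectory found" if traj is not None else "all samples rejected")
```

A verifier of this kind can flag hallucinated actions at deployment time, but rejection alone does not remove the structural mismatches the paper analyzes; it only filters their symptoms.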
📝 Abstract
Robot Foundation Models such as Vision-Language-Action models are rapidly reshaping how robot policies are trained and deployed, replacing hand-designed planners with end-to-end generative action models. While these systems demonstrate impressive generalization, it remains unclear whether they fundamentally resolve the long-standing challenges of robotics. We address this question by analyzing action hallucinations -- generated actions that violate physical constraints -- and how they compound into plan-level failures. Focusing on latent-variable generative policies, we show that hallucinations often arise from structural mismatches between feasible robot behavior and common model architectures. We study three such barriers -- topological, precision, and horizon -- and show how they impose unavoidable trade-offs. Our analysis provides mechanistic explanations for reported empirical failures of generative robot policies and suggests principled directions for improving their reliability and trustworthiness, without abandoning their expressive power.