Your Vision-Language-Action Model Already Has Attention Heads For Path Deviation Detection

📅 2026-03-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the susceptibility of Vision-Language-Action (VLA) models to trajectory deviations caused by visual-reasoning hallucinations during navigation. While existing approaches rely on additional training or complex uncertainty heuristics, this study reveals for the first time that VLA models inherently contain "navigation heads" with spatiotemporal causal awareness. By monitoring just three attention heads within the frozen model, the authors construct a training-free, real-time deviation-detection framework. Upon detecting an anomaly, the system switches to a lightweight reinforcement-learning policy for recovery. Evaluated on a physical robot, the method achieves a 44.6% true-positive detection rate with only an 11.7% false-positive rate, demonstrating its efficiency, practicality, and robustness.
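The detection-to-recovery handoff described above can be sketched as a simple routing step. All names below (`vla_policy`, `rl_rollback_policy`, `control_step`) are illustrative assumptions; the paper does not expose this API.

```python
# Hypothetical sketch of the detection-to-recovery switch. The policies
# here are stand-ins, not the authors' implementation.

def vla_policy(obs, instruction):
    # Stand-in for the heavy VLA model's semantic action prediction.
    return ("vla_action", instruction)

def rl_rollback_policy(obs):
    # Stand-in for the lightweight RL policy that executes a
    # shortest-path rollback once a deviation is flagged.
    return ("rollback_action",)

def control_step(obs, instruction, deviation_detected):
    """Bypass the VLA model and hand control to the RL fallback on anomaly."""
    if deviation_detected:
        return rl_rollback_policy(obs)
    return vla_policy(obs, instruction)

print(control_step("frame", "go to the kitchen", False))  # normal VLA step
print(control_step("frame", "go to the kitchen", True))   # recovery step
```

The key design point the summary emphasizes is that detection adds no overhead: the attention signals are already computed during the VLA forward pass, so the switch above is the only extra control logic.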


๐Ÿ“ Abstract
Vision-Language-Action (VLA) models have demonstrated strong potential for predicting semantic actions in navigation tasks, showing the ability to reason over complex linguistic instructions and visual contexts. However, they are fundamentally hindered by visual-reasoning hallucinations that lead to trajectory deviations. Addressing this issue has conventionally required training external critic modules or relying on complex uncertainty heuristics. In this work, we discover that monitoring a few attention heads within a frozen VLA model can accurately detect path deviations without incurring additional computational overhead. We refer to these heads, which inherently capture the spatiotemporal causality between historical visual sequences and linguistic instructions, as Navigation Heads. Using these heads, we propose an intuitive, training-free anomaly-detection framework that monitors their signals to detect hallucinations in real time. Surprisingly, among over a thousand attention heads, a combination of just three is sufficient to achieve a 44.6% deviation detection rate with a low false-positive rate of 11.7%. Furthermore, upon detecting a deviation, we bypass the heavy VLA model and trigger a lightweight Reinforcement Learning (RL) policy to safely execute a shortest-path rollback. By integrating this entire detection-to-recovery pipeline onto a physical robot, we demonstrate its practical robustness. All source code will be publicly available.
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
path deviation
visual-reasoning hallucinations
trajectory deviation
navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action models
attention heads
path deviation detection
hallucination monitoring
training-free anomaly detection