🤖 AI Summary
This work addresses task failures in vision-language-action (VLA) models that arise from geometric ambiguities when only few-shot demonstrations are available. To mitigate this, the authors propose the VGAS framework, which at inference time uses a high-recall VLA model to generate multiple action-chunk candidates and a Q-Chunk-Former evaluator augmented with explicit geometric regularization (EGR) to select the best candidate. By jointly preserving semantic fidelity and geometric precision, VGAS significantly improves task success rates and robustness under limited demonstrations and distribution shifts; it also stabilizes value estimation and demonstrates strong generalization in real-world robotic manipulation.
📝 Abstract
Vision--Language--Action (VLA) models bridge multimodal reasoning with physical control, but adapting them to new tasks from scarce demonstrations remains unreliable. While fine-tuned VLA policies often produce semantically plausible trajectories, failures frequently stem from unresolved geometric ambiguities, where near-miss action candidates lead to divergent execution outcomes under limited supervision. We study few-shot VLA adaptation from a \emph{generation--selection} perspective and propose \textbf{VGAS} (\textbf{V}alue-\textbf{G}uided \textbf{A}ction-chunk \textbf{S}election), a framework that performs inference-time best-of-$N$ selection to identify action chunks that are both semantically faithful and geometrically precise. Specifically, \textbf{VGAS} employs a fine-tuned VLA as a high-recall proposal generator and introduces the \textrm{Q-Chunk-Former}, a geometrically grounded Transformer critic that resolves fine-grained geometric ambiguities. In addition, we propose \textit{Explicit Geometric Regularization} (\texttt{EGR}), which shapes a discriminative value landscape to preserve action-ranking resolution among near-miss candidates while mitigating value instability under scarce supervision. Experiments and theoretical analysis demonstrate that \textbf{VGAS} consistently improves success rates and robustness under limited demonstrations and distribution shifts. Our code is available at https://github.com/Jyugo-15/VGAS.
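The generation--selection loop the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration only: `propose_chunks` stands in for sampling from the fine-tuned VLA proposal generator, and `q_value` stands in for the learned Q-Chunk-Former critic; neither reflects the paper's actual models or training.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_chunks(n_candidates: int, horizon: int, action_dim: int) -> np.ndarray:
    """Sample N candidate action chunks (stand-in for VLA sampling)."""
    return rng.normal(size=(n_candidates, horizon, action_dim))

def q_value(chunk: np.ndarray) -> float:
    """Score one chunk (stand-in for the critic). As a toy heuristic,
    prefer smooth, small-magnitude action sequences."""
    smoothness = -np.abs(np.diff(chunk, axis=0)).sum()
    magnitude = -np.abs(chunk).sum()
    return float(smoothness + 0.1 * magnitude)

def select_best_chunk(n_candidates: int = 8,
                      horizon: int = 4,
                      action_dim: int = 7) -> np.ndarray:
    """Best-of-N selection: generate candidates, score each with the
    critic, and return the argmax for execution."""
    candidates = propose_chunks(n_candidates, horizon, action_dim)
    scores = [q_value(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

best = select_best_chunk()
print(best.shape)  # (horizon, action_dim) = (4, 7)
```

The key design point is that the policy only needs high recall (the correct chunk must appear somewhere among the N samples), while the critic carries the burden of fine-grained geometric discrimination among near-miss candidates.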