Act, Think or Abstain: Complexity-Aware Adaptive Inference for Vision-Language-Action Models

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three weaknesses of existing vision-language-action (VLA) models: high computational cost, imbalanced resource allocation across tasks of varying difficulty, and the lack of uncertainty estimation on out-of-distribution tasks, which can lead to catastrophic failures. Inspired by human cognition, the authors propose an adaptive reasoning framework that introduces, for the first time, a complexity-aware decision mechanism into VLA models. By leveraging latent embeddings from the visual backbone, the method efficiently estimates task difficulty from visual information alone and dynamically selects among three strategies: Act, Think, or Abstain. A lightweight complexity detector is constructed by combining parametric and non-parametric estimators. Experiments on LIBERO, LIBERO-PRO, and real-world robotic platforms show that the detector achieves an 80% F1-score with only 5% of the training data, significantly improving both inference efficiency and robustness.

📝 Abstract
Current research on Vision-Language-Action (VLA) models predominantly focuses on enhancing generalization through established reasoning techniques. While effective, these improvements invariably increase computational complexity and inference latency. Furthermore, these mechanisms are typically applied indiscriminately, resulting in the inefficient allocation of resources for trivial tasks while simultaneously failing to provide the uncertainty estimation necessary to prevent catastrophic failure on out-of-distribution tasks. Inspired by human cognition, we propose an adaptive framework that dynamically routes VLA execution based on the complexity of the perceived state. Our approach transforms the VLA's vision-language backbone into an active detection tool by projecting latent embeddings into an ensemble of parametric and non-parametric estimators. This allows the system to execute known tasks immediately (Act), reason about ambiguous scenarios (Think), and preemptively halt execution when encountering significant physical or semantic anomalies (Abstain). In our empirical analysis, we observe a phenomenon where visual embeddings alone are superior for inferring task complexity due to the semantic invariance of language. Evaluated on the LIBERO and LIBERO-PRO benchmarks as well as on a real robot, our vision-only configuration achieves 80% F1-Score using as little as 5% of training data, establishing itself as a reliable and efficient task complexity detector.
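The routing idea in the abstract — score a visual embedding with both a parametric and a non-parametric estimator, then dispatch to Act, Think, or Abstain — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the choice of a Mahalanobis-distance estimator, a k-NN distance estimator, the equal-weight ensemble, and all threshold values are assumptions for demonstration.

```python
import numpy as np

class ComplexityRouter:
    """Sketch of a complexity-aware router for a VLA model.

    Scores a visual embedding z with an ensemble of:
      - a parametric estimator (Mahalanobis distance to a Gaussian fit
        on training embeddings), and
      - a non-parametric estimator (mean distance to the k nearest
        training embeddings),
    then routes to Act / Think / Abstain by thresholding the score.
    Estimator choices, weights, and thresholds are illustrative
    assumptions, not taken from the paper.
    """

    def __init__(self, train_embeddings, k=5, act_thresh=1.0, abstain_thresh=3.0):
        self.train = np.asarray(train_embeddings, dtype=float)
        self.mu = self.train.mean(axis=0)
        cov = np.cov(self.train, rowvar=False)
        # Regularized pseudo-inverse keeps the distance stable if cov is singular.
        self.prec = np.linalg.pinv(cov + 1e-6 * np.eye(cov.shape[0]))
        self.k = k
        self.act_thresh = act_thresh
        self.abstain_thresh = abstain_thresh

    def complexity(self, z):
        z = np.asarray(z, dtype=float)
        d = z - self.mu
        maha = float(np.sqrt(d @ self.prec @ d))        # parametric score
        dists = np.linalg.norm(self.train - z, axis=1)
        knn = float(np.sort(dists)[: self.k].mean())    # non-parametric score
        return 0.5 * maha + 0.5 * knn                   # simple equal-weight ensemble

    def route(self, z):
        c = self.complexity(z)
        if c < self.act_thresh:
            return "Act"       # familiar state: execute immediately
        if c < self.abstain_thresh:
            return "Think"     # ambiguous state: invoke reasoning
        return "Abstain"       # anomalous state: halt execution
```

In-distribution embeddings land near the training manifold and score low (Act); embeddings far from it score high on both estimators (Abstain), with the intermediate band reserved for deliberate reasoning (Think). The ensemble hedges the parametric Gaussian assumption with a density-free k-NN check.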
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
adaptive inference
computational complexity
uncertainty estimation
out-of-distribution tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive inference
vision-language-action models
complexity-aware routing
uncertainty estimation
abstention mechanism