🤖 AI Summary
This study addresses the robustness of vision-language-action (VLA) models when executing instructions with false premises, e.g., natural language commands that reference non-existent objects or infeasible conditions. Existing models often fail catastrophically, either executing such instructions blindly or rejecting them without recourse. To overcome this, we propose Instruct-Verify-and-Act (IVA), the first unified framework that jointly detects unexecutable instructions, generates linguistically grounded clarifications, and recommends perceptually grounded, executable alternatives. IVA fine-tunes multimodal large models via structured prompt engineering and context-augmented semi-synthetic data, jointly modeling visual perception, linguistic reasoning, and action planning. Experiments under large-scale instruction tuning show that IVA improves false-premise detection accuracy by 97.56% and increases the successful response rate under erroneous conditions by 50.78%, substantially enhancing the reliability and trustworthiness of VLA models in open, dynamic environments.
📄 Abstract
Recently, Vision-Language-Action (VLA) models have demonstrated strong performance on a range of robotic tasks. These models rely on multimodal inputs, with language instructions playing a crucial role -- not only in predicting actions, but also in robustly interpreting user intent, even when the requests are impossible to fulfill. In this work, we investigate how VLAs can recognize, interpret, and respond to false-premise instructions: natural language commands that reference objects or conditions absent from the environment. We propose Instruct-Verify-and-Act (IVA), a unified framework that (i) detects when an instruction cannot be executed due to a false premise, (ii) engages in language-based clarification or correction, and (iii) grounds plausible alternatives in perception and action. Toward this end, we construct a large-scale instruction tuning setup with structured language prompts and train a VLA model capable of handling both accurate and erroneous requests. Our approach leverages a contextually augmented, semi-synthetic dataset containing paired positive and false-premise instructions, enabling robust detection and natural-language correction. Our experiments show that IVA improves false-premise detection accuracy by 97.56% over baselines, while increasing successful responses in false-premise scenarios by 50.78%.
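The three-stage behavior described above (detect a false premise, clarify in language, propose a grounded alternative) can be sketched as a toy decision flow. This is a minimal illustrative sketch, not the paper's implementation: all names are hypothetical, and the string-matching "premise check" stands in for the visual grounding a real VLA model would perform.

```python
# Hypothetical sketch of the IVA decision flow: (i) detect an unexecutable
# instruction, (ii) produce a clarification, (iii) suggest an executable
# alternative grounded in the perceived scene. Names are illustrative only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class IVAResponse:
    executable: bool              # did the instruction's premise hold?
    message: str                  # language-based confirmation or clarification
    action_target: Optional[str]  # object the robot would actually act on


def instruct_verify_and_act(requested_object: str,
                            scene_objects: List[str]) -> IVAResponse:
    """Toy premise check: a real model grounds the full instruction in
    perception; here we merely test object presence in the scene list."""
    if requested_object in scene_objects:
        # Premise holds: confirm and act on the requested object.
        return IVAResponse(True, f"Picking up the {requested_object}.",
                           requested_object)
    # False premise detected: clarify instead of executing blindly,
    # and recommend a perceptually grounded alternative if one exists.
    alternative = scene_objects[0] if scene_objects else None
    message = f"I don't see a {requested_object} in the scene."
    if alternative is not None:
        message += f" Would you like the {alternative} instead?"
    return IVAResponse(False, message, alternative)
```

A positive instruction yields a confirmation and the requested target, while a false-premise instruction yields a clarification plus a fallback target, mirroring the paired positive/false-premise structure of the training data.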