AI Summary
To address the prevalence of spatial description errors in human instructions and the limited robustness of existing models, this paper introduces the first Vision-and-Language Navigation in Continuous Environments (VLN-CE) benchmark featuring diverse, human-plausible instruction errors. It formally defines the task of Instruction Error Detection and Localization and proposes the first cross-modal error localization method: a cross-modal Transformer that achieves fine-grained alignment between visual trajectories and language instructions while explicitly modeling anomalies. The paper also establishes a principled evaluation protocol for instruction-error robustness. Under the new benchmark, state-of-the-art VLN-CE methods suffer up to a 25% absolute drop in Success Rate, whereas the proposed method significantly outperforms all baselines on both error detection and localization. It also uncovers previously unannotated errors in the validation sets of two mainstream VLN-CE datasets, R2R-CE and RxR-CE.
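To make the architecture described above concrete, here is a minimal sketch of a cross-modal Transformer that scores instruction tokens against trajectory observations. This is an illustrative assumption, not the paper's actual model: the class name, feature dimensions, layer counts, and the two prediction heads (a sequence-level detection logit and per-token localization logits) are all hypothetical choices consistent with the task definition.

```python
# Minimal sketch of a cross-modal Transformer for instruction error
# detection and localization. Hypothetical architecture, not the paper's:
# dimensions, heads, and layer counts are assumptions for illustration.
import torch
import torch.nn as nn

class InstructionErrorDetector(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # Cross-attention: instruction tokens attend to trajectory frames.
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.cross_modal = nn.TransformerDecoder(decoder_layer, num_layers)
        self.token_head = nn.Linear(d_model, 1)   # per-token error logit (localization)
        self.global_head = nn.Linear(d_model, 1)  # sequence-level error logit (detection)

    def forward(self, instr_feats, traj_feats):
        # instr_feats: (B, L, d) language token embeddings
        # traj_feats:  (B, T, d) visual features along the executed trajectory
        fused = self.cross_modal(tgt=instr_feats, memory=traj_feats)
        token_logits = self.token_head(fused).squeeze(-1)   # (B, L)
        global_logit = self.global_head(fused.mean(dim=1))  # (B, 1)
        return global_logit, token_logits

model = InstructionErrorDetector()
instr = torch.randn(2, 60, 256)  # e.g., pretrained language-model token features
traj = torch.randn(2, 40, 256)   # e.g., projected visual frame features
global_logit, token_logits = model(instr, traj)
```

Training such a model with a binary cross-entropy loss on both heads would let a single forward pass answer "does this instruction contain an error?" and "which token is wrong?", matching the detection-plus-localization framing of the task.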
Abstract
Vision-and-Language Navigation in Continuous Environments (VLN-CE) is one of the most intuitive yet challenging embodied AI tasks. Agents are tasked to navigate towards a target goal by executing a set of low-level actions, following a series of natural language instructions. All VLN-CE methods in the literature assume that language instructions are exact. However, in practice, instructions given by humans can contain errors when describing a spatial environment, due to inaccurate memory or confusion. Current VLN-CE benchmarks do not address this scenario, making the state-of-the-art methods in VLN-CE fragile in the presence of erroneous instructions from human users. For the first time, we propose a novel benchmark dataset that introduces various types of instruction errors reflecting potential human causes. This benchmark provides valuable insight into the robustness of VLN systems in continuous environments. We observe a noticeable performance drop (up to -25%) in Success Rate when evaluating the state-of-the-art VLN-CE methods on our benchmark. Moreover, we formally define the task of Instruction Error Detection and Localization, and establish an evaluation protocol on top of our benchmark dataset. We also propose an effective method, based on a cross-modal transformer architecture, that achieves the best performance in error detection and localization compared to baselines. Surprisingly, our proposed method has revealed errors in the validation sets of the two commonly used datasets for VLN-CE, i.e., R2R-CE and RxR-CE, demonstrating the utility of our technique in other tasks.
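As a toy illustration of the flavor of error the benchmark injects (our assumption about one plausible error type, e.g. a direction mix-up from faulty memory; not the dataset's actual generation pipeline), the hypothetical helper below perturbs one directional word in an instruction and records its token position, which is exactly the ground truth that the detection and localization task would evaluate against.

```python
# Toy sketch of injecting a human-like direction error into an instruction
# and recording its location. Hypothetical helper, not the benchmark's
# actual generation pipeline.
import random

SWAPS = {"left": "right", "right": "left", "up": "down", "down": "up"}

def inject_direction_error(instruction: str, rng: random.Random):
    """Swap one directional word; return (perturbed, error_token_index).

    Returns (instruction, None) unchanged if no directional word is found.
    """
    tokens = instruction.split()
    candidates = [i for i, t in enumerate(tokens) if t.lower() in SWAPS]
    if not candidates:
        return instruction, None
    i = rng.choice(candidates)
    tokens[i] = SWAPS[tokens[i].lower()]
    return " ".join(tokens), i

rng = random.Random(0)
perturbed, idx = inject_direction_error(
    "Walk past the sofa and turn left at the stairs.", rng)
print(perturbed, "| error at token", idx)
# -> Walk past the sofa and turn right at the stairs. | error at token 6
```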