🤖 AI Summary
To address the weak generalization of vision-language navigation (VLN) models caused by the scarcity of human-annotated navigation paths, this paper proposes a counterfactual-reasoning-based data augmentation paradigm. The core innovation is formulating counterfactual inference as an adversarially driven path sampling mechanism: a model-agnostic Adversarial Path Sampler (APS) automatically generates challenging yet semantically consistent navigation trajectories. These trajectories support both adversarial training of navigation agents and pre-exploration of unseen environments. The method integrates adversarial training, counterfactual data augmentation, and cross-modal instruction-vision alignment modeling. Evaluated on the R2R benchmark, it consistently improves multiple baseline models: success rate in unseen environments increases by 3.2%, and SPL (Success weighted by Path Length) rises by 2.8%, demonstrating clear gains in robustness and generalization.
📝 Abstract
Vision-and-Language Navigation (VLN) is a task where agents must decide how to move through a 3D environment to reach a goal by grounding natural language instructions in the visual surroundings. A key challenge in VLN is data scarcity, since it is difficult to collect enough navigation paths with human-annotated instructions for interactive environments. In this paper, we explore the use of counterfactual thinking as a human-inspired data augmentation method that yields more robust models. Counterfactual thinking describes the human propensity to imagine possible alternatives to events that have already occurred. We propose an adversarially driven counterfactual reasoning model that considers effective counterfactual conditions rather than relying on low-quality augmented data. In particular, we present a model-agnostic adversarial path sampler (APS) that learns to sample challenging paths that force the navigator to improve, based on navigation performance. APS can also be used for pre-exploration of unseen environments to strengthen the model's ability to generalize. We evaluate the influence of APS on the performance of different VLN baseline models using the Room-to-Room (R2R) dataset. The results show that adversarial training with our proposed APS benefits VLN models in both seen and unseen environments, and the pre-exploration process yields further improvements in unseen environments.
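The adversarial objective described above (the sampler seeks paths that maximize the navigator's loss, while the navigator trains to minimize it) can be illustrated with a deliberately simplified sketch. Everything here is a toy stand-in, not the paper's actual models: `path difficulty` and `skill` are scalar proxies for the APS's sampling policy and the navigator's parameters, and the update rules are illustrative gradient-style steps.

```python
import random

def navigator_loss(path_difficulty, skill):
    # Toy proxy: harder paths produce higher loss until the
    # navigator's skill catches up (hinge-shaped for simplicity).
    return max(0.0, path_difficulty - skill)

def train_adversarial(steps=100, lr=0.1, seed=0):
    """Illustrative APS-style loop (hypothetical dynamics):
    the sampler pushes path difficulty up to maximize the
    navigator's loss, while the navigator improves to minimize it."""
    rng = random.Random(seed)
    difficulty, skill = 0.5, 0.0
    for _ in range(steps):
        # Sampler proposes a challenging path near its current frontier.
        d = difficulty + 0.1 * rng.random()
        loss = navigator_loss(d, skill)
        # Navigator update: reduce loss on the sampled path.
        skill += lr * loss
        # Sampler update: keep raising the challenge.
        difficulty += lr * 0.5
    return skill, difficulty

final_skill, final_difficulty = train_adversarial()
```

The point of the sketch is the min-max structure: because the sampler's difficulty keeps rising, the navigator is continually exposed to paths just beyond its current ability, which is the intuition behind APS producing training data the navigator cannot yet handle.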