🤖 AI Summary
To address the limited generalization capability of vision-language navigation (VLN) agents caused by scarcity of real-world trajectory data, this paper proposes WCGEN, a World-Consistent Data Generation framework. WCGEN operates in two stages: first, it constructs a 3D geometrically consistent trajectory model from point clouds to ensure spatial coherence and topological validity; second, it leverages 3D-knowledge-driven viewpoint prediction and angular synthesis to generate instruction-trajectory-observation triplets that are multi-view, rotationally consistent, and physically plausible. Crucially, WCGEN is the first VLN data generation method to jointly model spatial coherence, rotational consistency, and 3D geometric consistency—overcoming the fundamental limitation of conventional image-level augmentation approaches, which lack an explicit world model. Evaluated on multiple standard VLN benchmarks, WCGEN achieves state-of-the-art performance, significantly improving path success rate and goal-oriented success rate—particularly in unseen environments.
📝 Abstract
Vision-and-Language Navigation (VLN) is a challenging task that requires an agent to navigate photorealistic environments by following natural-language instructions. One major obstacle in VLN is data scarcity, which leads to poor generalization to unseen environments. Though data augmentation is a promising way to scale up the dataset, generating VLN data that is both diverse and world-consistent remains problematic. To address this issue, we propose world-consistent data generation (WCGEN), an effective data-augmentation framework that satisfies both diversity and world-consistency, aiming to enhance agents' generalization to novel environments. Our framework consists of two stages: a trajectory stage, which leverages a point-cloud-based technique to ensure spatial coherence among viewpoints, and a viewpoint stage, which adopts a novel angle-synthesis method to guarantee spatial and wraparound consistency within each observation. By accurately predicting viewpoint changes with 3D knowledge, our approach maintains world-consistency throughout the generation procedure. Experiments on a wide range of datasets verify the effectiveness of our method, demonstrating that our data-augmentation strategy enables agents to achieve new state-of-the-art results on all navigation tasks and enhances VLN agents' generalization to unseen environments.
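The two-stage pipeline described above can be sketched in code. This is a minimal, hypothetical illustration of the control flow only: every name, data structure, and default below is an assumption for exposition (the paper's actual point-cloud reasoning and angle synthesis are far more involved), not the authors' implementation.

```python
import math
from dataclasses import dataclass
from typing import Dict, List, Tuple

# NOTE: all names here are hypothetical stand-ins for the paper's components.

@dataclass
class Viewpoint:
    position: Tuple[float, float, float]  # 3D location along the trajectory
    heading: float                        # agent heading in radians


def trajectory_stage(point_cloud: List[Tuple[float, float, float]],
                     stride: int = 2) -> List[Viewpoint]:
    """Stage 1 (sketch): sample spatially coherent viewpoints.

    A real implementation would reason over the point cloud's 3D geometry;
    here we simply walk through the points at a fixed stride as a placeholder.
    """
    return [Viewpoint(position=p, heading=0.0)
            for p in point_cloud[::max(1, stride)]]


def viewpoint_stage(viewpoints: List[Viewpoint],
                    num_angles: int = 12) -> List[Dict]:
    """Stage 2 (sketch): synthesize a panoramic observation per viewpoint.

    Evenly spaced headings stand in for the paper's angle-synthesis method;
    spacing them over a full circle keeps the panorama wraparound-consistent
    (the last view connects back to the first).
    """
    observations = []
    for vp in viewpoints:
        views = [vp.heading + 2 * math.pi * k / num_angles
                 for k in range(num_angles)]
        observations.append({"position": vp.position, "view_angles": views})
    return observations


def generate_triplet(point_cloud: List[Tuple[float, float, float]],
                     instruction: str) -> Dict:
    """Assemble one instruction-trajectory-observation triplet."""
    trajectory = trajectory_stage(point_cloud)
    observations = viewpoint_stage(trajectory)
    return {"instruction": instruction,
            "trajectory": trajectory,
            "observations": observations}
```

The key structural point the sketch captures is that observations are derived from trajectory geometry, not generated independently per image, which is what distinguishes this kind of world-consistent generation from image-level augmentation.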