🤖 AI Summary
This paper addresses the problem of generating accurate, contextually coherent navigation instructions solely from first-person initial and goal images, without requiring semantic annotations or structured environmental priors such as maps. To this end, we propose a joint visual-forecasting and instruction-generation framework that incorporates both one-pass and interleaved multimodal reasoning strategies to emulate human-like incremental spatial reasoning. Our approach is built upon an autoregressive multimodal large language model, integrating visual state prediction, cross-modal alignment training, and end-to-end instruction synthesis. Evaluated on our newly constructed R2R-Goal dataset, the method achieves significant improvements in BLEU-4 and CIDEr scores over prior state-of-the-art approaches and demonstrates strong cross-domain generalization.
Abstract
We introduce Goal-Conditioned Visual Navigation Instruction Generation (GoViG), a new task that aims to autonomously generate precise and contextually coherent navigation instructions solely from egocentric visual observations of initial and goal states. Unlike conventional approaches that rely on structured inputs such as semantic annotations or environmental maps, GoViG exclusively leverages raw egocentric visual data, substantially improving its adaptability to unseen and unstructured environments. Our method addresses this task by decomposing it into two interconnected subtasks: (1) visual forecasting, which predicts intermediate visual states bridging the initial and goal views; and (2) instruction generation, which synthesizes linguistically coherent instructions grounded in both observed and anticipated visuals. These subtasks are integrated within an autoregressive multimodal large language model trained with tailored objectives to ensure spatial accuracy and linguistic clarity. Furthermore, we introduce two complementary multimodal reasoning strategies, one-pass and interleaved reasoning, to mimic incremental human cognitive processes during navigation. To evaluate our method, we propose the R2R-Goal dataset, combining diverse synthetic and real-world trajectories. Empirical results demonstrate significant improvements over state-of-the-art methods, achieving superior BLEU-4 and CIDEr scores along with robust cross-domain generalization.
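The interleaved reasoning strategy, which alternates visual forecasting with instruction generation, can be illustrated with a minimal sketch. All names here (`forecast`, `instruct`, `reached`, the toy integer "views") are hypothetical stand-ins for the paper's autoregressive multimodal LLM, not the authors' actual interface:

```python
class ToyModel:
    """Illustrative stand-in for the multimodal LLM; views are integers for demo."""

    def forecast(self, views, goal_view):
        # Subtask 1: predict the next intermediate visual state.
        return views[-1] + 1

    def instruct(self, views, next_view):
        # Subtask 2: generate an instruction grounded in observed + predicted views.
        return f"move to {next_view}."

    def reached(self, view, goal_view):
        return view >= goal_view


def interleaved_reasoning(model, initial_view, goal_view, max_steps=8):
    """Alternate forecasting and instruction generation until the goal is reached,
    mimicking incremental human spatial reasoning."""
    views = [initial_view]
    instructions = []
    for _ in range(max_steps):
        next_view = model.forecast(views, goal_view)
        instructions.append(model.instruct(views, next_view))
        views.append(next_view)
        if model.reached(next_view, goal_view):
            break
    return " ".join(instructions)
```

In the one-pass variant, by contrast, all intermediate visual states would be predicted before a single instruction-generation call, rather than alternating step by step.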