AI Summary
Existing vision-and-language navigation (VLN) methods passively process redundant visual inputs and fail to differentiate relevant historical context, leading to inefficient perception and fragmented reasoning. To address this, the paper proposes ProFocus, a framework that, for the first time, enables active perception and focused reasoning without any task-specific training by combining large language models (LLMs) and vision-language models (VLMs). ProFocus constructs structured egocentric semantic maps to generate goal-directed visual queries, and introduces Branch-Diverse Monte Carlo Tree Search (BD-MCTS) to select high-value waypoints and emphasize the historical context associated with them. The method achieves state-of-the-art performance in the zero-shot setting on both the R2R and REVERIE benchmarks.
Abstract
Vision-and-Language Navigation (VLN) requires agents to accurately perceive complex visual environments and to reason over navigation instructions and histories. However, existing methods passively process redundant visual inputs and treat all historical context indiscriminately, resulting in inefficient perception and unfocused reasoning. To address these challenges, we propose \textbf{ProFocus}, a training-free progressive framework that unifies \underline{Pro}active Perception and \underline{Focus}ed Reasoning through collaboration between large language models (LLMs) and vision-language models (VLMs). For proactive perception, ProFocus transforms panoramic observations into structured egocentric semantic maps, enabling the orchestration agent to identify the visual information missing for reliable decision-making and to generate targeted visual queries, with corresponding focus regions, that guide the perception agent to acquire the required observations. For focused reasoning, we propose Branch-Diverse Monte Carlo Tree Search (BD-MCTS) to identify the top-$k$ high-value waypoints among extensive historical candidates. The decision agent then concentrates its reasoning on the historical context associated with these waypoints, rather than weighting all historical waypoints equally. Extensive experiments validate the effectiveness of ProFocus, which achieves state-of-the-art performance among zero-shot methods on the R2R and REVERIE benchmarks.
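The abstract does not give pseudocode for BD-MCTS, so the sketch below is only a rough illustration of the stated idea: run Monte Carlo Tree Search over candidate waypoints and return the top-$k$ high-value ones, with a diversity bonus added to the standard UCT score to keep exploration spread across branches. The `Node`, `bd_mcts_topk`, and `reward_fn` names, the synthetic child-waypoint expansion, and the particular diversity term are all illustrative assumptions, not the authors' implementation.

```python
import math
import random


class Node:
    """A search-tree node wrapping a (hypothetical) waypoint id."""

    def __init__(self, waypoint_id, parent=None):
        self.waypoint_id = waypoint_id
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0


def uct_score(node, c=1.4, diversity_weight=0.5):
    # Standard UCT (exploit + explore) plus an assumed diversity bonus
    # that favors children whose share of sibling visits is low,
    # pushing the search toward under-explored branches.
    exploit = node.value / (node.visits + 1e-9)
    explore = c * math.sqrt(math.log(node.parent.visits + 1) / (node.visits + 1e-9))
    sibling_visits = sum(s.visits for s in node.parent.children)
    diversity = diversity_weight * (1.0 - node.visits / (sibling_visits + 1e-9))
    return exploit + explore + diversity


def bd_mcts_topk(root, reward_fn, n_iters=200, k=3, branching=3, rng=None):
    """Run diversity-augmented MCTS and return the k most-visited waypoints."""
    rng = rng or random.Random(0)
    next_id = root.waypoint_id + 1
    for _ in range(n_iters):
        # Selection: descend to a leaf via the diversity-augmented UCT.
        node = root
        while node.children:
            node = max(node.children, key=uct_score)
        # Expansion: attach synthetic successor waypoints (a stand-in for
        # the agent's real navigable candidates).
        if node.visits > 0:
            for _ in range(branching):
                node.children.append(Node(next_id, parent=node))
                next_id += 1
            node = node.children[0]
        # Simulation: score this waypoint with a rollout reward.
        reward = reward_fn(node.waypoint_id, rng)
        # Backpropagation: accumulate visits and value up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Collect every explored waypoint (excluding the root) and keep the
    # k most-visited ones as the high-value historical context.
    nodes, stack = [], list(root.children)
    while stack:
        n = stack.pop()
        nodes.append(n)
        stack.extend(n.children)
    nodes.sort(key=lambda n: n.visits, reverse=True)
    return [n.waypoint_id for n in nodes[:k]]
```

In the paper's setting the rollout reward would come from the decision agent's assessment of a waypoint, and the returned top-$k$ waypoints would delimit the historical context the agent reasons over; here `reward_fn` is just a placeholder scoring function.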