SeeNav-Agent: Enhancing Vision-Language Navigation with Visual Prompt and Step-Level Policy Optimization

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LVLM-based Vision-and-Language Navigation (VLN) agents suffer from perceptual hallucinations, reasoning biases, and planning failures. To address these issues, we propose a dual-view visual prompting mechanism that suppresses perceptual errors, and we design Step Reward Group Policy Optimization (SRGPO), which integrates verifiable process rewards with randomized step grouping to enable stable advantage estimation and dense feedback. Our approach significantly improves training stability and cross-scene generalization. On EmbodiedBench, GPT-4.1 achieves an 86.7% success rate in the zero-shot setting using our prompt, surpassing the prior SOTA by roughly 20 percentage points; after SRGPO fine-tuning, Qwen2.5-VL-3B attains 72.3%, outperforming the previous best by 5.6 points. Our core contributions are threefold: (1) the first dual-view visual prompting strategy that explicitly mitigates LVLM perceptual hallucinations in VLN; (2) the first step-level reward grouping framework for reinforcement learning in VLN; and (3) a unified solution to the coupled perception, reasoning, and planning failures in VLN.

📝 Abstract
Existing Vision-Language Navigation (VLN) agents based on Large Vision-Language Models (LVLMs) often suffer from perception errors, reasoning errors, and planning errors, which significantly hinder their navigation performance. To address these limitations, a novel VLN agent framework, named SeeNav-Agent, is proposed in this work. First, to reduce perception hallucinations in the visual module of the VLN agent, a dual-view Visual Prompt (VP) technique is introduced in the input space, which also improves the agent's understanding of its current spatial state. Subsequently, a novel step-level Reinforcement Fine-Tuning (RFT) method, Step Reward Group Policy Optimization (SRGPO), is designed for the post-training of VLN agents. In SRGPO, we first define verifiable process rewards for the navigation task, and then perform efficient step-level advantage estimation by randomly grouping different navigation steps. SRGPO provides dense reward signals for the reinforcement learning process of the VLN agent and enhances its planning capability. Experimental results on the EmbodiedBench Navigation benchmark indicate that, with the zero-shot VP module, GPT-4.1 achieves a navigation success rate of 86.7%, surpassing the current best LVLM by approximately 20 percentage points (pp). Through post-training based on SRGPO, the Qwen2.5-VL-3B model reaches a navigation success rate of 72.3%, outperforming the best existing LVLM by 5.6 pp. Moreover, compared to RFT algorithms such as GRPO and GiGPO, the proposed SRGPO demonstrates significant improvements in training stability, convergence efficiency, and generalization capability.
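The step-level advantage estimation described in the abstract can be illustrated with a small sketch. The grouping-and-normalization scheme below is an assumption based on the abstract's description (randomly grouping navigation steps and standardizing each step's reward within its group, GRPO-style); the paper's exact formulation may differ.

```python
import random
import statistics

def srgpo_advantages(step_rewards, group_size, seed=0):
    """Estimate per-step advantages by randomly grouping steps and
    normalizing each step's reward against its group's statistics
    (a group-relative baseline, computed per random group)."""
    rng = random.Random(seed)
    indices = list(range(len(step_rewards)))
    rng.shuffle(indices)  # random step grouping across trajectories
    advantages = [0.0] * len(step_rewards)
    for start in range(0, len(indices), group_size):
        group = indices[start:start + group_size]
        rewards = [step_rewards[i] for i in group]
        mean = statistics.fmean(rewards)
        std = statistics.pstdev(rewards)
        for i in group:
            # advantage = reward standardized within its random group
            advantages[i] = (step_rewards[i] - mean) / (std + 1e-8)
    return advantages
```

Because every step receives its own advantage (rather than one trajectory-level value), the policy gradient gets a dense feedback signal, which is the stability benefit the abstract attributes to SRGPO over trajectory-level GRPO.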
Problem

Research questions and friction points this paper is trying to address.

Reduces perception hallucinations in Vision-Language Navigation agents
Enhances planning capability with step-level reinforcement fine-tuning
Improves navigation success rates and training stability in VLN
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-view visual prompt reduces perception hallucinations
Step-level reinforcement fine-tuning enhances planning capability
Random step grouping provides dense reward signals
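The "verifiable process rewards" mentioned above can be made concrete with a minimal sketch. The distance-based shaping and the specific reward values here are illustrative assumptions, not the paper's actual reward definition for EmbodiedBench navigation.

```python
def process_reward(prev_dist, curr_dist, reached_goal):
    """Hypothetical verifiable process reward for one navigation step:
    +1.0 for reaching the goal, +0.1 for reducing the distance to the
    goal, -0.1 otherwise. Each component is checkable from simulator
    state, which is what makes the reward 'verifiable'."""
    if reached_goal:
        return 1.0
    return 0.1 if curr_dist < prev_dist else -0.1
```

Such a reward is emitted at every step, so it can feed the step-level advantage estimation directly instead of waiting for a sparse end-of-episode success signal.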