WalkGPT: Grounded Vision-Language Conversation with Depth-Aware Segmentation for Pedestrian Navigation

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing large vision-language models in pedestrian navigation, which often lack explicit visual grounding, suffer from object hallucination, and exhibit unreliable depth reasoning, and therefore fail to deliver accurate accessibility guidance. To overcome these challenges, the authors propose WalkGPT, the first end-to-end unified architecture tailored for pedestrian navigation, which integrates language reasoning, pixel-level segmentation, and relative depth estimation to enable fine-grained visual grounding and depth-aware natural language navigation instructions. Key innovations include the Grounded Navigation Guide task, a Multi-Scale Query Projector (MSQP), a Calibrated Text Projector (CTP), and a Region Alignment Loss, which together enable precise visual localization without requiring user prompts. Evaluated on the newly curated 41k-image PAVE benchmark, WalkGPT significantly advances grounding-based reasoning and segmentation performance, yielding more realistic and comprehensive accessibility navigation instructions.

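The summary above says the MSQP shapes the final image tokens by aggregating them with text tokens across spatial hierarchies. Purely as an illustration of that idea (the paper's actual MSQP design is not reproduced here), the sketch below lets text-token queries cross-attend over image features at several spatial scales and fuses the results; all module names, dimensions, and the fusion step are assumptions.

```python
# Hypothetical sketch of a multi-scale, text-conditioned query projector.
# The actual MSQP internals are not given on this page; names and shapes are assumptions.
import torch
import torch.nn as nn


class MultiScaleQueryProjector(nn.Module):
    """Aggregates image features from several spatial scales, conditioned on text tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 8, num_scales: int = 3):
        super().__init__()
        # One cross-attention block per spatial scale: text tokens query image tokens.
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_scales)
        )
        self.fuse = nn.Linear(num_scales * dim, dim)

    def forward(self, text_tokens: torch.Tensor, image_scales: list) -> torch.Tensor:
        # text_tokens: (B, T, dim); image_scales: list of (B, H_i*W_i, dim) feature maps
        per_scale = []
        for attn, feats in zip(self.cross_attn, image_scales):
            out, _ = attn(query=text_tokens, key=feats, value=feats)
            per_scale.append(out)
        # Concatenate the scale-wise summaries and project back to the model dimension.
        return self.fuse(torch.cat(per_scale, dim=-1))


# Toy usage with random tensors.
proj = MultiScaleQueryProjector()
text = torch.randn(2, 12, 256)
scales = [torch.randn(2, n, 256) for n in (64 * 64, 32 * 32, 16 * 16)]
print(proj(text, scales).shape)  # torch.Size([2, 12, 256])
```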
📝 Abstract
Ensuring accessible pedestrian navigation requires reasoning about both semantic and spatial aspects of complex urban scenes, a challenge that existing Large Vision-Language Models (LVLMs) struggle to meet. Although these models can describe visual content, their lack of explicit grounding leads to object hallucinations and unreliable depth reasoning, limiting their usefulness for accessibility guidance. We introduce WalkGPT, a pixel-grounded LVLM for the new task of Grounded Navigation Guide, unifying language reasoning and segmentation within a single architecture for depth-aware accessibility guidance. Given a pedestrian-view image and a navigation query, WalkGPT generates a conversational response with segmentation masks that delineate accessible and harmful features, along with relative depth estimation. The model incorporates a Multi-Scale Query Projector (MSQP) that shapes the final image tokens by aggregating them along text tokens across spatial hierarchies, and a Calibrated Text Projector (CTP), guided by a proposed Region Alignment Loss, that maps language embeddings into segmentation-aware representations. These components enable fine-grained grounding and depth inference without user-provided cues or anchor points, allowing the model to generate complete and realistic navigation guidance. We also introduce PAVE, a large-scale benchmark of 41k pedestrian-view images paired with accessibility-aware questions and depth-grounded answers. Experiments show that WalkGPT achieves strong grounded reasoning and segmentation performance. The source code and dataset are available on the project website at https://sites.google.com/view/walkgpt-26/home.
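The abstract states that a Region Alignment Loss guides the Calibrated Text Projector to produce segmentation-aware language embeddings. As one plausible illustration of what such an objective could look like (the paper does not spell out the loss here), the sketch below aligns text-derived region embeddings with mask-pooled visual features via a contrastive term; the pooling scheme, temperature, and function name are assumptions.

```python
# Hypothetical sketch of a region-alignment objective: pull each text-derived
# segmentation embedding toward the visual features pooled under its mask.
# The actual Region Alignment Loss is not specified on this page; this is an assumption.
import torch
import torch.nn.functional as F


def region_alignment_loss(text_emb: torch.Tensor,
                          visual_feats: torch.Tensor,
                          masks: torch.Tensor) -> torch.Tensor:
    """
    text_emb:     (N, D) one embedding per referred region, e.g. from a text projector
    visual_feats: (D, H, W) dense image features
    masks:        (N, H, W) binary ground-truth masks for the same regions
    """
    D, H, W = visual_feats.shape
    feats = visual_feats.reshape(D, H * W)                # (D, HW)
    m = masks.reshape(masks.shape[0], H * W).float()      # (N, HW)
    # Average-pool the visual features inside each mask.
    region_feats = m @ feats.t() / m.sum(dim=1, keepdim=True).clamp(min=1.0)  # (N, D)
    # Cosine-similarity alignment: each text embedding should match its own region.
    sim = F.normalize(text_emb, dim=-1) @ F.normalize(region_feats, dim=-1).t()
    targets = torch.arange(text_emb.shape[0])
    return F.cross_entropy(sim / 0.07, targets)           # temperature is an assumed value


# Toy usage with random tensors.
loss = region_alignment_loss(torch.randn(4, 256),
                             torch.randn(256, 32, 32),
                             torch.rand(4, 32, 32) > 0.5)
print(loss.item())
```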
Problem

Research questions and friction points this paper is trying to address.

grounded vision-language models
pedestrian navigation
depth-aware segmentation
object hallucination
accessibility guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grounded Vision-Language Model
Depth-Aware Segmentation
Multi-Scale Query Projector
Region Alignment Loss
Pedestrian Navigation
🔎 Similar Papers
No similar papers found.