IRef-VLA: A Benchmark for Interactive Referential Grounding with Imperfect Language in 3D Scenes

πŸ“… 2025-03-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses robust indoor navigation under ambiguous, erroneous, or scene-mismatched natural language instructions. To this end, we introduce IRef-VLA, a 3D referential grounding benchmark explicitly designed for imperfect language instructions. It comprises over 11.5K real-scanned rooms, 7.6M heuristically generated semantic relations, and 4.7M referential statements, augmented with ambiguous or erroneous language. The benchmark integrates scene graphs, navigable free-space annotations, and fine-grained semantic object and room annotations. Methodologically, we evaluate state-of-the-art models to establish performance baselines and propose a graph-search baseline that uses scene-graph knowledge to bound performance and generate alternative interpretations of an instruction. All data and code are publicly released to advance research in 3D language grounding and interactive navigation.

πŸ“ Abstract
With the recent rise of large language models, vision-language models, and other general foundation models, there is growing potential for multimodal, multi-task robotics that can operate in diverse environments given natural language input. One such application is indoor navigation using natural language instructions. However, despite recent progress, this problem remains challenging due to the 3D spatial reasoning and semantic understanding required. Additionally, the language used may be imperfect or misaligned with the scene, further complicating the task. To address this challenge, we curate a benchmark dataset, IRef-VLA, for Interactive Referential Vision and Language-guided Action in 3D Scenes with imperfect references. IRef-VLA is the largest real-world dataset for the referential grounding task, consisting of over 11.5K scanned 3D rooms from existing datasets, 7.6M heuristically generated semantic relations, and 4.7M referential statements. Our dataset also contains semantic object and room annotations, scene graphs, and navigable free space annotations, and is augmented with statements where the language has imperfections or ambiguities. We verify the generalizability of our dataset by evaluating with state-of-the-art models to obtain a performance baseline, and we also develop a graph-search baseline to demonstrate the performance bound and the generation of alternatives using scene-graph knowledge. With this benchmark, we aim to provide a resource for 3D scene understanding that aids the development of robust, interactive navigation systems. The dataset and all source code are publicly released at https://github.com/HaochenZ11/IRef-VLA.
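The abstract mentions 7.6M heuristically generated semantic relations but does not spell out the heuristics here. As a hedged illustration only (not the dataset's actual rules), the sketch below shows how coarse spatial relations such as "near", "above", and "below" could be derived from 3D bounding-box centroids; the class `Obj`, the function `pairwise_relations`, and the distance thresholds are all assumed for this example.

```python
from dataclasses import dataclass
import math

@dataclass
class Obj:
    name: str
    center: tuple  # (x, y, z) bounding-box centroid in meters

def pairwise_relations(objects, near_thresh=1.5, vert_thresh=0.5):
    """Derive coarse spatial relations from object centroids.

    Hypothetical heuristic, not the paper's actual pipeline:
    'near' if the horizontal distance is under a threshold,
    'above'/'below' from the vertical offset between centroids.
    """
    relations = []
    for a in objects:
        for b in objects:
            if a is b:
                continue
            dx = a.center[0] - b.center[0]
            dy = a.center[1] - b.center[1]
            dz = a.center[2] - b.center[2]
            if math.hypot(dx, dy) < near_thresh:
                relations.append((a.name, "near", b.name))
            if dz > vert_thresh:
                relations.append((a.name, "above", b.name))
            elif dz < -vert_thresh:
                relations.append((a.name, "below", b.name))
    return relations

# Toy example: a lamp sitting on a table
print(pairwise_relations([Obj("lamp", (1.0, 2.0, 1.2)), Obj("table", (1.1, 2.0, 0.5))]))
```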
Problem

Research questions and friction points this paper is trying to address.

Robust indoor navigation when natural language instructions are imperfect or misaligned with the scene
Referential grounding that demands 3D spatial reasoning and semantic understanding
Lack of a large-scale real-world benchmark for interactive vision-language action in 3D
Innovation

Methods, ideas, or system contributions that make the work stand out.

IRef-VLA benchmark dataset for 3D referential grounding with imperfect language, including scene graphs and free-space annotations
7.6M heuristically generated semantic relations and 4.7M referential statements
Graph-search baseline that uses scene-graph knowledge to bound performance and generate alternatives (see the sketch below)
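The graph-search baseline is only described at a high level on this page. As a hedged sketch of the general idea (not the authors' implementation), one could match a parsed referring expression, here assumed to be a (target class, relation, anchor class) triple, against scene-graph edges and fall back to ranked alternatives when no node satisfies every constraint, which is what lets an interactive agent propose candidates under imperfect language. The function `ground_reference` and the node/edge attribute names are assumptions for this example.

```python
import networkx as nx

def ground_reference(graph: nx.MultiDiGraph, target: str, relation: str, anchor: str):
    """Search a scene graph for objects of class `target` that stand in
    `relation` to an object of class `anchor`.

    Returns (exact_matches, alternatives): if no node satisfies the full
    constraint, alternatives are target-class nodes ranked by how many
    constraints they satisfy, so an agent can offer them as candidates.
    """
    exact, partial = [], []
    for node, data in graph.nodes(data=True):
        if data.get("label") != target:
            continue
        score = 0
        for _, nbr, edge in graph.out_edges(node, data=True):
            if edge.get("relation") == relation:
                score += 1
                if graph.nodes[nbr].get("label") == anchor:
                    score += 1
        (exact if score >= 2 else partial).append((node, score))
    partial.sort(key=lambda x: -x[1])
    return [n for n, _ in exact], [n for n, _ in partial]

# Toy scene: two chairs, only one of which is near the window
g = nx.MultiDiGraph()
g.add_node("chair_1", label="chair")
g.add_node("chair_2", label="chair")
g.add_node("window_1", label="window")
g.add_edge("chair_1", "window_1", relation="near")
print(ground_reference(g, target="chair", relation="near", anchor="window"))
# -> (['chair_1'], ['chair_2']): chair_2 is returned as an alternative
```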
πŸ”Ž Similar Papers
No similar papers found.