Words into World: A Task-Adaptive Agent for Language-Guided Spatial Retrieval in AR

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional AR systems are constrained by fixed-category detectors or fiducial markers, which limits support for open-vocabulary natural language queries and complex spatial-relation understanding. This paper proposes a modular AR agent system that integrates multimodal large language models (MLLMs) with grounded vision models, introducing a task-adaptive mechanism and a dynamic AR scene graph encoding nine typed spatial and semantic relations, enabling plug-and-use multimodal fusion without retraining. The system supports language-driven multi-object relational reasoning, physics-aware manipulation coupling, and meter-accurate 3D anchor localization, exposing callable functions for recognition, measurement, comparison, selection, and execution. To evaluate performance, the authors introduce GroundedAR-Bench, a new benchmark validating spatial grounding and relational reasoning across diverse environments. The approach achieves, for the first time, open-domain, language-guided spatial retrieval and closed-loop interaction in AR.
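
For intuition, here is a minimal Python sketch of how such a task-adaptive agent could route a query through its callable functions. The `Anchor3D` type, the cue-word heuristic, and the stub tool bodies are illustrative assumptions, not the paper's implementation; in the actual system, tool selection is delegated to the MLLM.

```python
from dataclasses import dataclass

@dataclass
class Anchor3D:
    """Hypothetical world-frame anchor (meter-scale coordinates)."""
    label: str
    x: float
    y: float
    z: float

# Two of the five callable functions named in the summary (recognition,
# measurement, comparison, selection, execution), stubbed for illustration.
def recognize(scene: list[Anchor3D], term: str) -> list[Anchor3D]:
    """Recognition: return anchors whose label matches the query term."""
    return [a for a in scene if term.lower() in a.label.lower()]

def measure(a: Anchor3D, b: Anchor3D) -> float:
    """Measurement: Euclidean distance in meters between two anchors."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5

def dispatch(query: str, scene: list[Anchor3D]) -> list[Anchor3D]:
    # Toy adaptivity: a relational cue word escalates the query from
    # plain recognition to recognition chained with measurement.
    candidates = recognize(scene, query.split()[-1])
    if "nearest" in query and len(candidates) >= 2:
        ref = candidates[0]
        candidates.sort(key=lambda a: measure(ref, a))
    return candidates

scene = [Anchor3D("red cup", 0.2, 0.0, 1.1),
         Anchor3D("blue cup", 1.5, 0.3, 2.0)]
print(dispatch("nearest cup", scene))  # both cups, ordered by distance to the first match
```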

📝 Abstract
Traditional augmented reality (AR) systems predominantly rely on fixed-class detectors or fiducial markers, limiting their ability to interpret complex, open-vocabulary natural language queries. We present a modular AR agent system that integrates multimodal large language models (MLLMs) with grounded vision models to enable relational reasoning in space and language-conditioned spatial retrieval in physical environments. Our adaptive task agent coordinates MLLMs and coordinate-aware perception tools to address varying query complexities, ranging from simple object identification to multi-object relational reasoning, while returning meter-accurate 3D anchors. It constructs dynamic AR scene graphs encoding nine typed relations (spatial, structural-semantic, causal-functional), enabling MLLMs to understand not just what objects exist, but how they relate and interact in 3D space. Through task-adaptive region-of-interest highlighting and contextual spatial retrieval, the system guides human attention to information-dense areas while supporting human-in-the-loop refinement. For complex queries, the agent dynamically invokes coordinate-aware tools (selection, measurement, comparison, and actuation), grounding language understanding in physical operations. The modular architecture supports plug-and-use vision-language models without retraining, establishing AR agents as intermediaries that augment MLLMs with real-world spatial intelligence for interactive scene understanding. We also introduce GroundedAR-Bench, an evaluation framework for language-driven real-world localization and relation grounding across diverse environments.
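
The dynamic scene graph with nine typed relations across three families is concrete enough to sketch as a data structure. Note that the nine relation names below are placeholders invented for illustration; the abstract names only the three families (spatial, structural-semantic, causal-functional), not the individual relations.

```python
from dataclasses import dataclass, field
from enum import Enum

class Relation(Enum):
    # Three families per the abstract; the nine member names are
    # illustrative placeholders, not the paper's actual relation set.
    LEFT_OF = "left_of"          # spatial
    ABOVE = "above"              # spatial
    NEAR = "near"                # spatial
    PART_OF = "part_of"          # structural-semantic
    SAME_CATEGORY = "same_cat"   # structural-semantic
    CONTAINS = "contains"        # structural-semantic
    SUPPORTS = "supports"        # causal-functional
    OPERATES = "operates"        # causal-functional
    BLOCKS = "blocks"            # causal-functional

@dataclass(frozen=True)
class SceneNode:
    label: str
    anchor: tuple[float, float, float]  # meter-accurate world position

@dataclass
class SceneGraph:
    nodes: list[SceneNode] = field(default_factory=list)
    edges: list[tuple[SceneNode, Relation, SceneNode]] = field(default_factory=list)

    def relate(self, subj: SceneNode, rel: Relation, obj: SceneNode) -> None:
        """Record a typed (subject, relation, object) edge."""
        self.edges.append((subj, rel, obj))

    def subjects_where(self, rel: Relation, obj: SceneNode) -> list[SceneNode]:
        """All subjects standing in `rel` to `obj` - the lookup a query
        like 'the mug left of the laptop' would reduce to."""
        return [s for s, r, o in self.edges if r is rel and o == obj]
```

An MLLM can then be prompted with serialized (subject, relation, object) triples, so it reasons over how objects relate in 3D space rather than over raw pixels.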
Problem

Research questions and friction points this paper is trying to address.

Interpreting complex open-vocabulary natural language queries in AR systems.
Enabling relational reasoning and spatial retrieval in physical environments.
Supporting task-adaptive human-in-the-loop refinement for interactive scene understanding.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates MLLMs with vision models for spatial reasoning
Constructs dynamic AR scene graphs with nine typed relations
Uses modular architecture for plug-and-use models without retraining (interface sketch below)
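
Here is a minimal sketch of what "plug-and-use without retraining" could look like at the interface level, using a structural Protocol; the interface name, method signature, and box format are assumptions for illustration, not the paper's API.

```python
from typing import Protocol

Box = tuple[int, int, int, int]  # x, y, width, height in image pixels

class GroundedVisionModel(Protocol):
    """Hypothetical interface: any open-vocabulary grounding model that
    maps an image plus a text query to labeled boxes fits structurally,
    with no shared base class and no retraining of the pipeline."""
    def ground(self, image_path: str, query: str) -> list[tuple[str, Box]]: ...

class ARAgentPipeline:
    def __init__(self, vision: GroundedVisionModel) -> None:
        # Swapping vision models means passing a different object here.
        self.vision = vision

    def locate(self, image_path: str, query: str) -> list[tuple[str, Box]]:
        return self.vision.ground(image_path, query)
```

Under this pattern, replacing one grounding model with another only requires the new model to expose a compatible `ground` method; the scene-graph and agent layers stay untouched.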