🤖 AI Summary
This work addresses the limitations of existing spatial vision-language models, which predominantly rely on offline evaluation and struggle to support long-horizon streaming inference and active perception. To bridge this gap, the authors introduce S3-Bench, a novel benchmark featuring a dual-domain evaluation framework that integrates controllable simulation with real-world streaming video, enabling timestamp-grounded streaming spatial question answering and active exploration. They further propose AMF-VLM, a model that compresses long-term observations via a memory-folding mechanism and incorporates an explicit exploration policy to generate navigation actions, including movement, rotation, and scanning. Experiments demonstrate that the proposed approach achieves performance gains of 8.8% and 13.3% on the S3-Eval simulation and real-world subsets, respectively, while maintaining strong transferability on standard spatial reasoning benchmarks.
📝 Abstract
Spatial understanding is fundamental for embodied agents, yet most spatial VLMs and benchmarks remain offline, evaluating post-hoc QA over pre-recorded inputs and overlooking two deployment-critical requirements: long-horizon streaming inference and active perception when the current view is insufficient. To address this gap, we introduce S3-Bench, a benchmark suite for streaming spatial question answering with active exploration, where queries are temporally grounded to specific timestamps and must be answered using only observations available up to that moment. S3-Bench adopts a dual-domain design, combining a scalable simulator with controllable trajectories and exploration actions, and real-world streaming videos that capture practical sensing artifacts for rigorous generalization evaluation. Overall, it spans 10K+ scenes and 26K+ trajectories, with dedicated training (S3-Train) and evaluation (S3-Eval) splits. We further propose AMF-VLM, which supports streaming spatial reasoning under a bounded compute budget via (i) memory folding, which compresses long-horizon observations into compact structured memory, and (ii) active exploration, which outputs explicit actions (e.g., move/rotate/scan) to acquire missing evidence before answering. Extensive experiments demonstrate that, compared to models using identical training data, our approach yields improvements of 8.8% and 13.3% on the simulated and real splits of S3-Eval, respectively, while maintaining competitive transferability to standard spatial benchmarks.
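To make the streaming setting concrete, the loop below is a minimal toy sketch of the two mechanisms the abstract names: folding incoming observations into a bounded memory, and deciding whether to answer a timestamped query or first emit an exploration action. All names here (`StreamingAgent`, `observe`, `decide`, the string-concatenation "folding") are illustrative assumptions, not the paper's actual API or model.

```python
from collections import deque

class StreamingAgent:
    """Toy sketch: fold observations into a fixed-size memory, then
    either answer a query or emit an exploration action to gather
    missing evidence. Purely illustrative; not AMF-VLM's implementation."""

    def __init__(self, memory_slots=4):
        # Bounded "folded" memory: compute stays constant as the stream grows.
        self.memory = deque(maxlen=memory_slots)

    def observe(self, frame_summary):
        # Memory-folding stand-in: when the memory is full, merge the two
        # oldest entries so long horizons are compressed, not dropped.
        if len(self.memory) == self.memory.maxlen:
            a = self.memory.popleft()
            b = self.memory.popleft()
            self.memory.appendleft(f"{a}+{b}")
        self.memory.append(frame_summary)

    def decide(self, query, evidence_needed):
        # If the folded memory already covers the required evidence,
        # answer using only observations seen so far; otherwise emit an
        # explicit action (here a scan) before answering.
        seen = "+".join(self.memory)
        missing = [e for e in evidence_needed if e not in seen]
        if not missing:
            return ("answer", seen)
        return ("action", f"scan_for:{missing[0]}")

agent = StreamingAgent(memory_slots=2)
for frame in ["kitchen", "hallway", "sofa"]:
    agent.observe(frame)
print(agent.decide("where is the sofa?", ["sofa"]))  # answers from memory
print(agent.decide("where is the tv?", ["tv"]))      # requests exploration
```

The point of the sketch is the contrast with offline QA: the agent never sees future frames, its memory cost is fixed regardless of stream length, and "active perception" appears as an explicit action output rather than a refusal to answer.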