AI Summary
This work addresses a key limitation of existing scene representations: the lack of post-hoc re-observability, which makes spatial-memory omissions irrecoverable when initial observations miss key targets. To overcome this, the paper introduces the capability of "Spatial Recollection," leveraging 3D Gaussian Splatting to construct a persistent, revisitable, and continuous representation of both geometry and appearance. This representation is further enriched by integrating object-level scene graphs with semantic-level language fields, enabling zero-shot object localization and embodied reasoning. A hybrid exploration strategy jointly optimizes semantic task objectives and geometric coverage, significantly improving robustness and performance on embodied question answering and lifelong navigation tasks, validating the effectiveness of the proposed framework.
Abstract
Effective embodied exploration requires agents to accumulate and retain spatial knowledge over time. However, existing scene representations, such as discrete scene graphs or static view-based snapshots, lack \textit{post-hoc re-observability}: if an initial observation misses a target, the resulting memory omission is often irrecoverable. To bridge this gap, we propose \textbf{GSMem}, a zero-shot embodied exploration and reasoning framework built upon 3D Gaussian Splatting (3DGS). By explicitly parameterizing continuous geometry and dense appearance, 3DGS serves as a persistent spatial memory that endows the agent with \textit{Spatial Recollection}: the ability to render photorealistic novel views from optimal, previously unoccupied viewpoints. To operationalize this, GSMem employs a retrieval mechanism that simultaneously leverages parallel object-level scene graphs and semantic-level language fields. This complementary design robustly localizes target regions, enabling the agent to ``hallucinate'' optimal views for high-fidelity Vision-Language Model (VLM) reasoning. Furthermore, we introduce a hybrid exploration strategy that combines VLM-driven semantic scoring with a 3DGS-based coverage objective, balancing task-aware exploration with geometric coverage. Extensive experiments on embodied question answering and lifelong navigation demonstrate the robustness and effectiveness of our framework.
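The hybrid exploration strategy described above can be illustrated with a minimal sketch. The function names, the weighted-sum form, and the trade-off hyperparameter below are assumptions for illustration, not the paper's actual implementation; the sketch only shows how a VLM-driven semantic score and a 3DGS coverage objective might be combined to rank candidate viewpoints.

```python
# Hypothetical sketch of a hybrid exploration score (all names assumed):
# each candidate viewpoint is ranked by a weighted sum of a VLM-assigned
# semantic relevance score and a 3DGS-based coverage-gain term.

def hybrid_score(semantic_score, coverage_gain, weight=0.5):
    """Combine task-aware semantic relevance with geometric coverage gain.

    semantic_score: VLM-assigned relevance of the viewpoint to the task, in [0, 1].
    coverage_gain:  estimated fraction of unobserved scene the view would add, in [0, 1].
    weight:         semantics-vs-coverage trade-off (assumed hyperparameter).
    """
    return weight * semantic_score + (1.0 - weight) * coverage_gain

def select_next_viewpoint(candidates, weight=0.5):
    """Pick the best-scoring candidate from (id, semantic_score, coverage_gain) tuples."""
    return max(candidates, key=lambda c: hybrid_score(c[1], c[2], weight))[0]
```

A high weight biases the agent toward task-relevant regions (useful for question answering), while a low weight favors completing geometric coverage of the scene (useful for lifelong mapping).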