GSMem: 3D Gaussian Splatting as Persistent Spatial Memory for Zero-Shot Embodied Exploration and Reasoning

πŸ“… 2026-03-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a limitation of existing scene representations: they lack post-hoc re-observability, so spatial memory cannot be recovered once an initial observation misses a key target. To overcome this, the paper introduces the capability of "Spatial Recollection," leveraging 3D Gaussian Splatting to build a persistent, revisitable, and continuous representation of both geometry and appearance. This representation is further enriched by pairing object-level scene graphs with semantic-level language fields, enabling zero-shot object localization and embodied reasoning. A hybrid exploration strategy jointly optimizes semantic task objectives and geometric coverage, and the framework's effectiveness is validated by improved robustness and performance on embodied question answering and lifelong navigation tasks.
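The summary's dual retrieval idea (object-level scene graph plus a dense semantic language field) can be pictured as fusing two similarity scores per candidate object. The sketch below is illustrative only: the function names, the `alpha` fusion weight, and the nearest-sample field lookup are assumptions for exposition, not the paper's actual retrieval mechanism.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def localize_target(query_emb, scene_graph, language_field, alpha=0.5):
    """Hypothetical fusion of object-level and field-level retrieval.

    scene_graph:    list of (center_xyz, object_embedding) nodes
    language_field: list of (center_xyz, field_embedding) dense samples
    Returns the 3D center of the best-scoring object.
    """
    def field_sim(center):
        # similarity of the query to the nearest language-field sample
        nearest = min(language_field,
                      key=lambda s: np.linalg.norm(np.asarray(s[0]) -
                                                   np.asarray(center)))
        return cosine(query_emb, nearest[1])

    best = max(scene_graph,
               key=lambda o: alpha * cosine(query_emb, o[1])
                             + (1 - alpha) * field_sim(o[0]))
    return best[0]
```

Fusing the two signals is what makes the design complementary: the scene graph gives discrete, object-centric candidates, while the language field scores regions the graph may have segmented poorly.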

πŸ“ Abstract
Effective embodied exploration requires agents to accumulate and retain spatial knowledge over time. However, existing scene representations, such as discrete scene graphs or static view-based snapshots, lack *post-hoc re-observability*. If an initial observation misses a target, the resulting memory omission is often irrecoverable. To bridge this gap, we propose **GSMem**, a zero-shot embodied exploration and reasoning framework built upon 3D Gaussian Splatting (3DGS). By explicitly parameterizing continuous geometry and dense appearance, 3DGS serves as a persistent spatial memory that endows the agent with *Spatial Recollection*: the ability to render photorealistic novel views from optimal, previously unoccupied viewpoints. To operationalize this, GSMem employs a retrieval mechanism that simultaneously leverages parallel object-level scene graphs and semantic-level language fields. This complementary design robustly localizes target regions, enabling the agent to "hallucinate" optimal views for high-fidelity Vision-Language Model (VLM) reasoning. Furthermore, we introduce a hybrid exploration strategy that combines VLM-driven semantic scoring with a 3DGS-based coverage objective, balancing task-aware exploration with geometric coverage. Extensive experiments on embodied question answering and lifelong navigation demonstrate the robustness and effectiveness of our framework.
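The hybrid exploration strategy described above balances a VLM-driven semantic score against a geometric coverage objective. A minimal sketch of such a weighted objective follows; the function names, the `lam` trade-off weight, and the grid-cell proxy for 3DGS coverage are assumptions for illustration, not the paper's implementation.

```python
def coverage_gain(visible_cells, observed_cells):
    # fraction of a candidate view's visible grid cells not yet observed;
    # a toy stand-in for a 3DGS-based coverage objective
    if not visible_cells:
        return 0.0
    return len(visible_cells - observed_cells) / len(visible_cells)

def select_next_view(candidates, observed_cells, lam=0.6):
    """Hypothetical hybrid scoring of candidate viewpoints.

    candidates:     dict view_id -> (semantic_score in [0, 1],
                                     set of grid cells visible from the view)
    observed_cells: set of grid cells already covered by the memory
    Picks the view maximizing lam * semantic + (1 - lam) * coverage gain.
    """
    def score(view_id):
        sem, cells = candidates[view_id]
        return lam * sem + (1 - lam) * coverage_gain(cells, observed_cells)
    return max(candidates, key=score)
```

With a high `lam` the agent chases task-relevant views even in explored space; with a low `lam` it favors frontier-like views that add geometric coverage, which is the balance the abstract describes.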
Problem

Research questions and friction points this paper is trying to address.

embodied exploration
spatial memory
post-hoc re-observability
3D scene representation
zero-shot reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian Splatting
Spatial Recollection
Zero-Shot Embodied Reasoning
Persistent Spatial Memory
Vision-Language Model
πŸ”Ž Similar Papers
No similar papers found.