🤖 AI Summary
Existing active mapping approaches fail to effectively integrate multimodal large language models (MLLMs) and neglect the critical impact of localization uncertainty on embodied agents. This paper proposes the first semantic-driven active mapping framework tailored for embodied agents: an MLLM serves as a zero-shot semantic planner enabling long-horizon, task-oriented exploration decisions; concurrently, a short-horizon action optimization mechanism—grounded in 3D Gaussian Splatting (3DGS) representation and Fisher information theory—explicitly accounts for localization uncertainty, jointly maximizing environmental information gain while minimizing pose estimation error. Evaluated on the Gibson and Habitat-Matterport benchmarks, our method achieves state-of-the-art performance, improving mapping completeness by +12.7% and enhancing localization robustness with a 23.4% reduction in pose estimation error.
📝 Abstract
We present an active mapping system that could plan for long-horizon exploration goals and short-term actions with a 3D Gaussian Splatting (3DGS) representation. Existing methods either did not take advantage of recent developments in multimodal Large Language Models (LLM) or did not consider challenges in localization uncertainty, which is critical in embodied agents. We propose employing multimodal LLMs for long-horizon planning in conjunction with detailed motion planning using our information-based algorithm. By leveraging high-quality view synthesis from our 3DGS representation, our method employs a multimodal LLM as a zero-shot planner for long-horizon exploration goals from the semantic perspective. We also introduce an uncertainty-aware path proposal and selection algorithm that balances the dual objectives of maximizing the information gain for the environment while minimizing the cost of localization errors. Experiments conducted on the Gibson and Habitat-Matterport 3D datasets demonstrate state-of-the-art results of the proposed method.