🤖 AI Summary
This work addresses the challenge of achieving online, near real-time, open-vocabulary language–3D spatial alignment for AI agents within 3D Gaussian Splatting-based SLAM (3DGS-SLAM). We propose the first end-to-end framework that eliminates offline language-feature preprocessing. Our method introduces a high-resolution CLIP embedding module (18 ms per frame), a two-stage lightweight online autoencoder that compresses 768-dimensional features to 15 while preserving open-vocabulary generalization, and a color–language disentangled optimization mechanism. It tightly integrates 3D Gaussian Splatting, CLIP-based semantic representation, SLAM pose tracking, and differentiable rendering. Experiments demonstrate that our system surpasses existing offline state-of-the-art methods in language-guided 3D localization accuracy while accelerating inference by over 40×. To our knowledge, it is the first approach to enable dynamic, interactive, natural-language-driven 3D scene understanding and manipulation in real time.
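To make the compression step concrete, below is a minimal PyTorch sketch of what a two-stage 768→15 autoencoder could look like. The layer widths, the intermediate 64-dimensional bottleneck, the activation choices, and all names (`TwoStageAutoEncoder`, `mid_dim`, etc.) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a two-stage autoencoder that compresses 768-dim
# CLIP features to a 15-dim latent, as described above. Layer widths and
# activations are assumptions; the paper's actual design may differ.
import torch
import torch.nn as nn


class TwoStageAutoEncoder(nn.Module):
    def __init__(self, in_dim=768, mid_dim=64, latent_dim=15):
        super().__init__()
        # Stage 1: coarse compression (768 -> 64).
        self.enc1 = nn.Sequential(nn.Linear(in_dim, mid_dim), nn.ReLU())
        self.dec1 = nn.Linear(mid_dim, in_dim)
        # Stage 2: fine compression (64 -> 15), e.g. adapted online per scene.
        self.enc2 = nn.Sequential(nn.Linear(mid_dim, latent_dim), nn.Tanh())
        self.dec2 = nn.Linear(latent_dim, mid_dim)

    def encode(self, x):
        return self.enc2(self.enc1(x))

    def decode(self, z):
        return self.dec1(self.dec2(z))

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z), z


# Usage: per-pixel CLIP features (H*W, 768) -> compact 15-dim maps that
# could be attached to 3D Gaussians and rendered differentiably.
feats = torch.randn(4096, 768)
model = TwoStageAutoEncoder()
recon, latent = model(feats)
loss = nn.functional.mse_loss(recon, feats)  # reconstruction objective
```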
📝 Abstract
To enable AI agents to interact seamlessly with both humans and 3D environments, they must not only perceive the 3D world accurately but also align human language with 3D spatial representations. While prior work has made significant progress by integrating language features into geometrically detailed 3D scene representations built with 3D Gaussian Splatting (GS), these approaches rely on computationally intensive offline preprocessing of language features for each input image, limiting adaptability to new environments. In this work, we introduce Online Language Splatting, the first framework to achieve online, near real-time, open-vocabulary language mapping within a 3DGS-SLAM system without requiring pre-generated language features. The key challenge lies in efficiently fusing high-dimensional language features into 3D representations while balancing computation speed, memory usage, rendering quality, and open-vocabulary capability. To this end, we design: (1) a high-resolution CLIP embedding module that generates detailed language feature maps in 18 ms per frame; (2) a two-stage online autoencoder that compresses 768-dimensional CLIP features to 15 dimensions while preserving open-vocabulary capability; and (3) a color–language disentangled optimization approach that improves rendering quality. Experimental results show that our online method not only surpasses state-of-the-art offline methods in accuracy but also achieves a more than 40× efficiency boost, demonstrating its potential for dynamic and interactive AI applications.
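As an illustration of the disentangled-optimization idea, the following toy sketch keeps color attributes and compressed language features in separate optimizers with separate losses, so neither branch's gradients touch the other. The `render` placeholder, parameter shapes, loss choices, and learning rates are all assumptions for illustration; the paper's actual rasterizer and objectives are not shown here.

```python
# A minimal, hypothetical sketch of color-language disentangled
# optimization: color/geometry parameters and per-Gaussian language
# features get separate optimizers and separate losses, so language
# supervision cannot degrade RGB rendering quality.
import torch

N = 1000                                          # number of Gaussians (toy)
color = torch.randn(N, 3, requires_grad=True)     # RGB attributes
lang = torch.randn(N, 15, requires_grad=True)     # compressed CLIP features

opt_color = torch.optim.Adam([color], lr=1e-3)
opt_lang = torch.optim.Adam([lang], lr=1e-2)


def render(attr):
    # Placeholder for differentiable Gaussian rasterization.
    return attr.mean(dim=0)


gt_rgb, gt_lang = torch.zeros(3), torch.zeros(15)

for step in range(100):
    # Color branch: only the RGB parameters receive gradients.
    loss_rgb = torch.nn.functional.l1_loss(render(color), gt_rgb)
    opt_color.zero_grad()
    loss_rgb.backward()
    opt_color.step()

    # Language branch: only the 15-dim features receive gradients.
    loss_lang = torch.nn.functional.mse_loss(render(lang), gt_lang)
    opt_lang.zero_grad()
    loss_lang.backward()
    opt_lang.step()
```

Keeping the two parameter groups in separate optimizers is one simple way to realize the disentanglement described above; the paper may achieve it differently (e.g., via architectural separation or gradient masking).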