OnlineSI: Taming Large Language Model for Online 3D Understanding and Grounding

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches struggle to enable multimodal large language models to perform efficient online 3D spatial understanding and object localization in dynamic environments. To address this challenge, this work proposes OnlineSI, a novel framework that introduces the first online 3D scene understanding mechanism tailored for embodied intelligence. By incorporating a bounded-capacity spatial memory structure, OnlineSI continuously integrates geometric and semantic information from point clouds under streaming video input, effectively balancing long-term perception with real-time inference while preventing computational costs from accumulating over time. Experiments on two representative datasets demonstrate that the proposed method significantly advances online 3D understanding performance. Furthermore, the authors introduce the Fuzzy F1-Score metric to mitigate ambiguities arising from imprecise annotations.
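The summary's key mechanism is a bounded-capacity spatial memory that keeps per-inference cost from growing with stream length. The paper's actual memory design is not given here; the sketch below is only a minimal, hypothetical illustration of the bounded-cost idea, using a fixed-size buffer that evicts the oldest entries (the `BoundedSpatialMemory` class, its eviction policy, and the placeholder features are assumptions, not the authors' method).

```python
from collections import deque

class BoundedSpatialMemory:
    """Hypothetical sketch of a bounded-capacity spatial memory.

    Once `capacity` entries are held, each new observation evicts the
    oldest, so the context consulted per inference stays O(capacity)
    no matter how long the video stream runs.
    """

    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self.entries = deque(maxlen=capacity)  # oldest entry evicted automatically

    def insert(self, frame_id: int, geometry, semantics):
        # In the real system these would be point-cloud and semantic
        # features; here they are opaque placeholders.
        self.entries.append((frame_id, geometry, semantics))

    def query(self):
        # Per-inference context never exceeds `capacity` entries.
        return list(self.entries)


mem = BoundedSpatialMemory(capacity=3)
for t in range(10):
    mem.insert(t, geometry=None, semantics=None)
print(len(mem.query()))       # bounded at 3 regardless of stream length
print(mem.query()[0][0])      # frames 0-6 were evicted; oldest kept is 7
```

A first-in-first-out eviction is the simplest possible policy; a real system would more plausibly merge or score entries by spatial relevance, but the constant-cost property shown here is the same.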

📝 Abstract
In recent years, researchers have become increasingly interested in endowing Multimodal Large Language Models (MLLMs) with spatial understanding and reasoning capabilities. However, most existing methods overlook the ability to operate continuously in an ever-changing world, and are difficult to deploy on embodied systems in real-world environments. In this work, we introduce OnlineSI, a framework that continuously improves its spatial understanding of its surroundings from a video stream. Our core idea is to maintain a finite spatial memory that retains past observations, ensuring that the computation required for each inference does not grow as input accumulates. We further integrate 3D point-cloud information with semantic information, helping the MLLM better locate and identify objects in the scene. To evaluate our method, we introduce the Fuzzy $F_1$-Score to mitigate annotation ambiguity, and test our method on two representative datasets. Experiments demonstrate the effectiveness of our method, paving the way towards real-world embodied systems.
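The abstract names a Fuzzy $F_1$-Score but does not define it here. One common way to soften $F_1$ against imprecise box annotations is to give each matched prediction fractional credit proportional to its overlap, instead of a hard 0/1 threshold. The sketch below illustrates that generic idea only; it is an assumption, not the paper's actual definition.

```python
def fuzzy_f1(matched_ious, num_pred, num_gt):
    """Illustrative soft F1 (NOT the paper's definition, which is not
    given in this summary): each matched prediction contributes its
    IoU as a fractional true positive rather than a hard 0/1 count.
    """
    soft_tp = sum(matched_ious)
    precision = soft_tp / num_pred if num_pred else 0.0
    recall = soft_tp / num_gt if num_gt else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Two predictions matched to two of three ground-truth boxes,
# with overlaps (IoU) of 0.9 and 0.6:
score = fuzzy_f1([0.9, 0.6], num_pred=2, num_gt=3)
print(score)  # soft TP = 1.5, precision = 0.75, recall = 0.5
```

Under a hard IoU threshold of 0.5 both matches would count fully and the score would jump discontinuously with small annotation shifts; the soft version degrades smoothly, which is the kind of ambiguity-mitigation the metric's name suggests.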
Problem

Research questions and friction points this paper is trying to address.

spatial understanding
online 3D grounding
embodied systems
multimodal large language models
continuous perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Spatial Understanding
Multimodal Large Language Model
3D Grounding
Spatial Memory
Embodied AI