🤖 AI Summary
Existing 3D-aware large models rely on pre-trained 3D detectors, lack interpretability, and require extensive point-level 3D annotations. Method: We propose the first explainable, video-driven 3D understanding framework that operates without point-level 3D supervision. It employs a reinforcement-learning-guided two-stage grounding process (temporal segment selection followed by 2D bounding box prediction), integrated with SAM2 pixel-wise mask tracking and RGB-D-to-3D geometric-textural projection, to jointly infer 3D structure and semantics directly from monocular RGB-D video. Contribution/Results: The framework supports open-vocabulary 3D visual question answering and generates step-by-step, evidence-backed reasoning traces. It significantly outperforms prior open-vocabulary methods across multiple benchmarks while requiring only task-level 2D box or textual labels for supervision, enabling high-fidelity, transparent 3D reconstruction and interpretable reasoning.
📝 Abstract
Using large language models to understand the 3D world has become increasingly popular. Yet existing 3D-aware LLMs act as black boxes: they output bounding boxes or textual answers without revealing how those decisions are made, and they still rely on pre-trained 3D detectors to supply object proposals. We introduce Scene-R1, a video-grounded framework that learns to reason about 3D scenes without any point-wise 3D instance supervision by pairing reinforcement-learning-driven reasoning with a two-stage grounding pipeline. In the temporal grounding stage, the model reasons explicitly over the video and selects the snippets most relevant to an open-ended query. In the subsequent image grounding stage, it analyzes the selected frames and predicts a 2D bounding box for the target object. We then track the object with SAM2 to produce pixel-accurate masks across RGB frames and project them back into 3D, eliminating the need for 3D detector-based proposals while capturing fine geometry and material cues. Scene-R1 also adapts to the 3D visual question answering task, answering free-form questions directly from video. Our training pipeline requires only task-level 2D boxes or textual labels, with no dense point-wise 3D annotations. Scene-R1 surpasses existing open-vocabulary baselines on multiple datasets while delivering transparent, step-by-step rationales. These results show that reinforcement-learning-based reasoning combined with RGB-D video alone offers a practical, annotation-efficient route to trustworthy 3D scene understanding.
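The final projection step described above (lifting SAM2's 2D masks into 3D using RGB-D frames) can be sketched with standard pinhole-camera back-projection. This is a minimal illustration under the usual assumptions (metric depth, known intrinsics), not the paper's actual implementation; the function name, signature, and optional extrinsics handling are illustrative.

```python
import numpy as np

def backproject_mask(depth, mask, K, cam_to_world=None):
    """Lift masked pixels of a depth map into a 3D point set.

    depth: (H, W) metric depth map aligned with the RGB frame
    mask:  (H, W) boolean segmentation mask (e.g. from SAM2)
    K:     (3, 3) camera intrinsic matrix
    cam_to_world: optional (4, 4) camera-to-world extrinsic
    Returns an (N, 3) array of 3D points. (Illustrative sketch.)
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.nonzero(mask)            # pixel rows/cols inside the mask
    z = depth[v, u]
    valid = z > 0                      # drop pixels with missing depth
    u, v, z = u[valid], v[valid], z[valid]
    # Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)  # points in the camera frame
    if cam_to_world is not None:       # optionally move to world frame
        pts = pts @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    return pts
```

Accumulating these per-frame point sets across the tracked video (using per-frame camera poses as `cam_to_world`) yields the object's 3D extent without any 3D detector proposals.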