REALM: An MLLM-Agent Framework for Open World 3D Reasoning Segmentation and Editing on Gaussian Splatting

๐Ÿ“… 2025-10-18
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
Existing methods struggle to bridge the semantic gap between complex natural language instructions and precise 3D object localization in open-world settings, failing to jointly achieve strong reasoning capability and fine-grained 3D spatial understanding. This paper introduces REALM, an MLLM-agent framework that integrates multimodal large language models (MLLMs) with 3D Gaussian Splatting representations to enable reasoning-based 3D segmentation and editing without requiring any 3D-specific post-training. Its core innovation is a global-to-local spatial grounding strategy that mitigates viewpoint sensitivity: multi-view images rendered from the Gaussian Splatting scene are fed into an MLLM for parallel reasoning, allowing the framework to handle ambiguous, reasoning-intensive instructions for object localization, removal, replacement, and style transfer. REALM achieves strong performance on the LERF, 3D-OVS, and REALM3D benchmarks and supports end-to-end editable 3D vision-language understanding in open-domain scenarios.

๐Ÿ“ Abstract
Bridging the gap between complex human instructions and precise 3D object grounding remains a significant challenge in vision and robotics. Existing 3D segmentation methods often struggle to interpret ambiguous, reasoning-based instructions, while 2D vision-language models that excel at such reasoning lack intrinsic 3D spatial understanding. In this paper, we introduce REALM, an innovative MLLM-agent framework that enables open-world reasoning-based segmentation without requiring extensive 3D-specific post-training. We perform segmentation directly on 3D Gaussian Splatting representations, capitalizing on their ability to render photorealistic novel views that are highly suitable for MLLM comprehension. As directly feeding one or more rendered views to the MLLM can lead to high sensitivity to viewpoint selection, we propose a novel Global-to-Local Spatial Grounding strategy. Specifically, multiple global views are first fed into the MLLM agent in parallel for coarse-level localization, aggregating responses to robustly identify the target object. Then, several close-up novel views of the object are synthesized to perform fine-grained local segmentation, yielding accurate and consistent 3D masks. Extensive experiments show that REALM achieves remarkable performance in interpreting both explicit and implicit instructions across LERF, 3D-OVS, and our newly introduced REALM3D benchmarks. Furthermore, our agent framework seamlessly supports a range of 3D interaction tasks, including object removal, replacement, and style transfer, demonstrating its practical utility and versatility. Project page: https://ChangyueShi.github.io/REALM.
Problem

Research questions and friction points this paper is trying to address.

Bridging complex human instructions with precise 3D object grounding
Enabling reasoning-based 3D segmentation without extensive post-training
Overcoming viewpoint sensitivity in 3D spatial understanding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses MLLM-agent framework for open-world 3D reasoning
Segments directly on 3D Gaussian Splatting representations
Employs Global-to-Local Spatial Grounding strategy
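
The two-stage grounding strategy described above can be sketched in code. This is a minimal illustration, not the authors' implementation: `render_view`, `query_mllm`, `segment_view`, and the close-up camera generator are hypothetical stand-ins for the paper's actual components (the Gaussian Splatting renderer, the MLLM agent, and the 2D segmenter whose masks are lifted to 3D).

```python
from collections import Counter

def render_view(scene, camera):
    # Stand-in: render a photorealistic novel view of the Gaussian Splatting scene.
    return {"camera": camera, "scene": scene}

def query_mllm(image, instruction):
    # Stand-in: ask the MLLM agent which object the (possibly implicit)
    # instruction refers to in this rendered view.
    return "mug"

def segment_view(image, target):
    # Stand-in: produce a fine-grained 2D mask of the target in a close-up view;
    # the paper aggregates such masks into a consistent 3D mask.
    return {"view": image["camera"], "object": target}

def ground_object(scene, instruction, global_cameras, close_up_fn, n_local=4):
    """Global-to-local grounding: vote across global views, then refine locally."""
    # Stage 1 (global): query the MLLM on several wide views in parallel and
    # aggregate the answers, so a single bad viewpoint cannot mislead grounding.
    votes = Counter(
        query_mllm(render_view(scene, cam), instruction) for cam in global_cameras
    )
    target, _ = votes.most_common(1)[0]

    # Stage 2 (local): synthesize close-up novel views of the voted target
    # and segment each one for an accurate, consistent mask.
    local_cameras = close_up_fn(scene, target, n_local)
    masks = [segment_view(render_view(scene, cam), target) for cam in local_cameras]
    return target, masks
```

The majority vote in stage 1 is one plausible aggregation rule; the paper's agent may combine responses differently, but the coarse-then-fine structure is the point of the sketch.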
Authors
Changyue Shi — Peking University
Minghao Chen — Hangzhou Dianzi University
Yiping Mao — Hangzhou Dianzi University
Chuxiao Yang — Hangzhou Dianzi University
Xinyuan Hu — Emory University (undergraduate)
Zhijie Wang — Hangzhou Dianzi University
Jiajun Ding — Hangzhou Dianzi University
Zhou Yu — Hangzhou Dianzi University