🤖 AI Summary
Existing 3D datasets lack fine-grained, multimodal, part-level segmentation annotations tailored for robotic navigation and real-world interaction. To address the limited part-level spatial understanding of 3D multimodal large language models (MLLMs), we propose a novel part-aware point cloud–instruction–mask alignment paradigm and introduce the first Part-Aware Point Grounding and Part-Aware Point Grounded Captioning tasks. We release 3DCoMPaT-GRIN—the first large-scale, part-level 3D instruction grounding dataset—containing 896K samples. Furthermore, we design an end-to-end framework that jointly trains a point cloud encoder and a multimodal LLM, integrating instruction tuning with mask regression while explicitly modeling part-level spatial–semantic alignment. Our method directly generates part-level masks from natural language instructions on 3DCoMPaT-GRIN, significantly outperforming existing 3D MLLMs. Finally, we establish the first benchmark for part-level 3D vision–language understanding.
📝 Abstract
While 3D MLLMs have achieved significant progress, they are restricted to object- and scene-level understanding and struggle to interpret 3D spatial structures at the part level. In this paper, we introduce Kestrel, a novel approach that empowers 3D MLLMs with part-aware understanding, enabling better interpretation and segmentation grounding of 3D objects at the part level. Despite its significance, the current landscape lacks tasks and datasets that endow and assess this capability. We therefore propose two novel tasks: (1) Part-Aware Point Grounding, in which the model directly predicts a part-level segmentation mask from user instructions, and (2) Part-Aware Point Grounded Captioning, in which the model produces a detailed caption that includes part-level descriptions and their corresponding masks. To support learning and evaluation of these tasks, we introduce the 3DCoMPaT Grounded Instructions Dataset (3DCoMPaT-GRIN). 3DCoMPaT-GRIN Vanilla, comprising 789k part-aware point cloud–instruction–segmentation mask triplets, evaluates MLLMs' ability to perform part-aware segmentation grounding. 3DCoMPaT-GRIN Grounded Caption, containing 107k part-aware point cloud–instruction–grounded caption triplets, assesses both part-aware language comprehension and segmentation grounding. Our proposed tasks, dataset, and Kestrel represent a preliminary effort to bridge the gap between human cognition and 3D MLLMs, i.e., the ability to perceive and engage with the environment at both the global and part levels. Extensive experiments on 3DCoMPaT-GRIN show that Kestrel can generate user-specified segmentation masks, a capability not present in any existing 3D MLLM. Kestrel thus establishes a benchmark for evaluating part-aware language comprehension and segmentation grounding of 3D objects. Project page at https://feielysia.github.io/Kestrel.github.io/
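To make the data format concrete, the sketch below shows what one 3DCoMPaT-GRIN Vanilla triplet might look like, together with per-point mask IoU, a standard metric for scoring segmentation grounding. The class and field names here are illustrative assumptions, not the released dataset schema.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical layout for one point cloud–instruction–mask triplet.
# Field names are assumptions for illustration only.
@dataclass
class GroundedInstructionSample:
    points: np.ndarray     # (N, 6) per-point xyz coordinates + rgb color
    instruction: str       # e.g. "segment the chair's armrest"
    part_mask: np.ndarray  # (N,) bool, True for points on the target part

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean per-point masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

# Toy example: 8 points, the first 4 belong to the target part.
sample = GroundedInstructionSample(
    points=np.random.rand(8, 6),
    instruction="segment the chair's armrest",
    part_mask=np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool),
)
# A predicted mask that gets 3 of 4 part points and adds 1 false positive.
pred = np.array([1, 1, 1, 0, 0, 0, 0, 1], dtype=bool)
print(mask_iou(pred, sample.part_mask))  # intersection 3 / union 5 = 0.6
```

A grounding model for this task would map `(points, instruction)` to a predicted per-point mask, which is then scored against `part_mask` with a metric like the IoU above.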