Part-X-MLLM: Part-aware 3D Multimodal Large Language Model

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D multimodal large language models lack native support for unified 3D understanding and generation: given joint RGB point-cloud and natural-language inputs, they offer no structured, executable instruction representation. Method: We propose the first native 3D multimodal LLM, built around a part-level structured grammar that encodes tasks as coherent token sequences of bounding boxes, semantic descriptions, and editing commands. The approach pre-trains a dual encoder to decouple geometric structure from semantics, then instruction-tunes on a large-scale part-centric dataset. Crucially, symbolic planning is decoupled from geometric synthesis, so a single language-native frontend can control heterogeneous geometry engines. Contribution/Results: The model achieves state-of-the-art performance on grounded visual question answering, compositional 3D generation, and localized editing, demonstrating that it produces high-fidelity, executable 3D programs.

📝 Abstract
We introduce Part-X-MLLM, a native 3D multimodal large language model that unifies diverse 3D tasks by formulating them as programs in a structured, executable grammar. Given an RGB point cloud and a natural language prompt, our model autoregressively generates a single, coherent token sequence encoding part-level bounding boxes, semantic descriptions, and edit commands. This structured output serves as a versatile interface to drive downstream geometry-aware modules for part-based generation and editing. By decoupling the symbolic planning from the geometric synthesis, our approach allows any compatible geometry engine to be controlled through a single, language-native frontend. We pre-train a dual-encoder architecture to disentangle structure from semantics and instruction-tune the model on a large-scale, part-centric dataset. Experiments demonstrate that our model excels at producing high-quality, structured plans, enabling state-of-the-art performance in grounded Q&A, compositional generation, and localized editing through one unified interface. Project page: https://chunshi.wang/Part-X-MLLM/
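The abstract describes a single token sequence that interleaves part-level bounding boxes, semantic descriptions, and edit commands, which downstream geometry engines then execute. The paper does not publish its token vocabulary here, so the sketch below is a hypothetical illustration of what such a part-level program might look like; all class and field names are assumptions, not the paper's actual grammar.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str    # semantic description, e.g. "chair seat"
    bbox: tuple  # axis-aligned box: (x0, y0, z0, x1, y1, z1)

@dataclass
class EditCommand:
    op: str       # e.g. "replace", "delete", "add"
    target: str   # name of the part the edit applies to
    prompt: str   # natural-language instruction for the geometry engine

@dataclass
class PartProgram:
    """One coherent plan: parts first, then localized edits."""
    parts: list = field(default_factory=list)
    edits: list = field(default_factory=list)

    def serialize(self) -> str:
        # Flatten the plan into a single token-like sequence that a
        # downstream geometry-aware module could consume.
        tokens = [f"<part name={p.name} bbox={p.bbox}>" for p in self.parts]
        tokens += [f"<edit op={e.op} target={e.target} prompt={e.prompt}>"
                   for e in self.edits]
        return " ".join(tokens)

prog = PartProgram(
    parts=[Part("seat", (0.0, 0.0, 0.4, 1.0, 1.0, 0.5))],
    edits=[EditCommand("replace", "seat", "make it a woven seat")],
)
print(prog.serialize())
```

The point of the structure is that understanding (boxes plus descriptions) and editing (commands) live in one autoregressively generated sequence, so a single decoder pass yields an executable plan.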
Problem

Research questions and friction points this paper is trying to address.

Unifying diverse 3D tasks through structured executable grammar
Generating part-level bounding boxes and semantic descriptions from prompts
Enabling geometry-aware part-based generation and editing via language interface
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies 3D tasks via structured executable grammar
Generates part-level boxes and commands in one sequence
Decouples symbolic planning from geometric synthesis engines
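The last point, decoupling symbolic planning from geometric synthesis, amounts to an interface boundary: the language frontend emits one plan, and any compatible backend realizes it. A minimal sketch of that boundary, with entirely hypothetical engine names:

```python
from abc import ABC, abstractmethod

class GeometryEngine(ABC):
    """Any backend able to realize a part-level plan (hypothetical interface)."""
    @abstractmethod
    def execute(self, plan: str) -> str: ...

class MeshEngine(GeometryEngine):
    def execute(self, plan: str) -> str:
        return f"mesh built from: {plan}"

class PointCloudEngine(GeometryEngine):
    def execute(self, plan: str) -> str:
        return f"points sampled from: {plan}"

def run(frontend_plan: str, engine: GeometryEngine) -> str:
    # The frontend's plan is engine-agnostic; heterogeneous engines
    # consume the same structured sequence.
    return engine.execute(frontend_plan)

print(run("<part name=seat ...>", MeshEngine()))
```

Swapping `MeshEngine` for `PointCloudEngine` changes the synthesis backend without touching the language frontend, which is the claimed benefit of the decoupling.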
Authors
Chunshi Wang (Zhejiang University, Tencent Hunyuan)
Junliang Ye (Tsinghua University)
Yunhan Yang (Tencent Hunyuan, The University of Hong Kong)
Yang Li (Tencent Hunyuan)
Zizhuo Lin (Zhejiang University)
Jun Zhu (Tsinghua University)
Zhuo Chen (Tencent Hunyuan)
Yawei Luo (Zhejiang University)
Chunchao Guo (Tencent Hunyuan)

Topics: Computer Vision, 3D Vision, Machine Learning, AI4SCI