ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and Understanding

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
State-of-the-art multimodal large language models (MLLMs), such as GPT-4o, support only image and text modalities and lack native 3D understanding and generation capabilities. Method: We introduce the first native 3D-aware MLLM, built upon Qwen-2.5-VL-7B-Instruct and end-to-end fine-tuned using a novel 3D discrete latent representation—derived from a 3D VQ-VAE—and the large-scale instruction-following dataset 3D-Alpaca. This enables arbitrary-order, bidirectional interaction between text and 3D assets. Contribution/Results: Our model achieves state-of-the-art performance on high-fidelity 3D reconstruction, cross-modal 3D generation, and 3D editing tasks, significantly outperforming existing baselines. It establishes a new paradigm for extending MLLMs into the 3D domain, enabling native multimodal reasoning over geometric, visual, and linguistic information.

Technology Category

Application Category

📝 Abstract
Recently, the powerful text-to-image capabilities of ChatGPT-4o have led to growing appreciation for native multimodal large language models. However, its multimodal capabilities remain confined to images and text. Yet beyond images, the ability to understand and generate 3D content is equally crucial. To address this gap, we propose ShapeLLM-Omni-a native 3D large language model capable of understanding and generating 3D assets and text in any sequence. First, we train a 3D vector-quantized variational autoencoder (VQVAE), which maps 3D objects into a discrete latent space to achieve efficient and accurate shape representation and reconstruction. Building upon the 3D-aware discrete tokens, we innovatively construct a large-scale continuous training dataset named 3D-Alpaca, encompassing generation, comprehension, and editing, thus providing rich resources for future research and training. Finally, by performing instruction-based training of the Qwen-2.5-vl-7B-Instruct model on the 3D-Alpaca dataset. Our work provides an effective attempt at extending multimodal models with basic 3D capabilities, which contributes to future research in 3D-native AI. Project page: https://github.com/JAMESYJL/ShapeLLM-Omni
Problem

Research questions and friction points this paper is trying to address.

Extends multimodal LLMs to include 3D content understanding and generation
Develops a 3D VQVAE for efficient shape representation and reconstruction
Creates a 3D-Alpaca dataset for training 3D-native AI models
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D vector-quantized variational autoencoder for shape representation
Large-scale 3D-Alpaca dataset for generation and comprehension
Instruction-based training on Qwen-2.5-vl-7B-Instruct model
🔎 Similar Papers
No similar papers found.