🤖 AI Summary
To address the coarse-grained modeling, slow generation, and weak semantic understanding of autoregressive models in 3D generation and comprehension, this paper proposes a multi-scale 3D vector-quantized variational autoencoder (VQ-VAE). It hierarchically discretizes 3D shapes into scale sequences rather than flat token sequences, and introduces a "predict-the-next-scale" autoregressive paradigm in place of conventional token-level prediction. The authors further design a 3D-aware tokenization scheme and a cross-modal LLM fine-tuning strategy to enable hierarchical semantic understanding and precise text-to-3D generation. The method generates high-fidelity 3D models in just 0.82 seconds on an A6000 GPU, outperforming state-of-the-art approaches in both speed and quality. Notably, it is the first to enable LLMs to produce fine-grained, interpretable natural-language descriptions of 3D structural geometry.
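As a minimal sketch of the "predict-the-next-scale" idea (using a placeholder random predictor, not the actual SAR3D transformer; the scale sizes and codebook size below are assumptions), each autoregressive step emits an entire token map for the next scale, so the number of steps equals the number of scales rather than the number of tokens:

```python
import numpy as np

SCALES = [1, 2, 4, 8, 16]   # assumed side lengths of successive token maps
CODEBOOK_SIZE = 4096        # assumed VQ-VAE codebook size

rng = np.random.default_rng(0)

def predict_next_scale(prefix_scales, side):
    """Stand-in for the autoregressive model: predicts all side*side
    tokens of the next scale in one step, conditioned on every coarser
    scale generated so far (here: random indices for illustration)."""
    return rng.integers(0, CODEBOOK_SIZE, size=(side, side))

def generate():
    scales = []
    for side in SCALES:                      # one step per scale
        scales.append(predict_next_scale(scales, side))
    return scales

maps = generate()
# 5 autoregressive steps cover 1 + 4 + 16 + 64 + 256 = 341 tokens,
# versus 341 steps for conventional token-by-token prediction.
total_tokens = sum(m.size for m in maps)
```

Under this scheme, generation latency scales with the number of scales (5 here), which is the structural reason next-scale prediction is much faster than flat token-level autoregression.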
📝 Abstract
Autoregressive models have demonstrated remarkable success across various fields, from large language models (LLMs) to large multimodal models (LMMs) and 2D content generation, moving closer to artificial general intelligence (AGI). Despite these advances, applying autoregressive approaches to 3D object generation and understanding remains largely unexplored. This paper introduces Scale AutoRegressive 3D (SAR3D), a novel framework that leverages a multi-scale 3D vector-quantized variational autoencoder (VQVAE) to tokenize 3D objects for efficient autoregressive generation and detailed understanding. By predicting the next scale in a multi-scale latent representation instead of the next single token, SAR3D reduces generation time significantly, achieving fast 3D object generation in just 0.82 seconds on an A6000 GPU. Additionally, since these tokens are enriched with hierarchical 3D-aware information, we fine-tune a pretrained LLM on them, enabling multimodal comprehension of 3D content. Our experiments show that SAR3D surpasses current 3D generation methods in both speed and quality and allows LLMs to interpret and caption 3D models comprehensively.