🤖 AI Summary
Existing 3D generative models predominantly rely on VAEs that encode every shape into a fixed-length sequence of latent tokens, ignoring the natural variation in scale and geometric complexity across shapes — a mismatch that produces redundant representations and constrains generation quality. To address this, the paper proposes an octree-guided adaptive tokenization framework that brings explicit geometric structure into latent modeling. The method produces variable-length token sequences via quadric-error-driven octree subdivision and uses a query-based transformer to assign a latent vector to each octree cell. An octree-based autoregressive generative model then consumes these variable-sized representations for shape generation. Compared with fixed-size tokenization, token counts drop by about 50% at comparable visual quality, and at similar token lengths the method yields markedly higher-quality and more diverse shapes — a step toward scalable, high-fidelity 3D generation.
📝 Abstract
Many 3D generative models rely on variational autoencoders (VAEs) to learn compact shape representations. However, existing methods encode all shapes into a fixed-size set of tokens, disregarding the inherent variations in scale and complexity across 3D data. This leads to inefficient latent representations that can compromise downstream generation. We address this challenge by introducing Octree-based Adaptive Tokenization, a novel framework that adjusts the length of the latent representation according to shape complexity. Our approach constructs an adaptive octree structure guided by a quadric-error-based subdivision criterion and allocates a shape latent vector to each octree cell using a query-based transformer. Building upon this tokenization, we develop an octree-based autoregressive generative model that effectively leverages these variable-sized representations in shape generation. Extensive experiments demonstrate that our approach reduces token counts by 50% compared to fixed-size methods while maintaining comparable visual quality. When using a similar token length, our method produces significantly higher-quality shapes. When paired with our downstream generative model, our method creates more detailed and diverse 3D content than existing approaches.
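The paper does not include pseudocode, but the core idea — subdivide an octree cell only where the geometry inside it is too complex to summarize with one token — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes a cheap planarity residual (total variance along the smallest principal axis of the cell's points) for the paper's quadric-error criterion, and all names (`planarity_error`, `adaptive_octree`, `tau`) are illustrative.

```python
# Hypothetical sketch of error-driven adaptive octree subdivision.
# A cell splits only while its points deviate too much from a single
# plane, so flat regions stay coarse (few tokens) and detailed regions
# refine (more tokens). Each leaf would receive one latent token.
import numpy as np

def planarity_error(pts: np.ndarray) -> float:
    """Residual of fitting one plane to `pts` (N x 3): total variance
    along the smallest principal axis — a stand-in for quadric error."""
    if len(pts) < 4:
        return 0.0
    cov = np.cov((pts - pts.mean(axis=0)).T)
    return float(np.linalg.eigvalsh(cov)[0]) * len(pts)

def adaptive_octree(pts, center, half, tau, max_depth, depth=0, leaves=None):
    """Recursively split a cubic cell while its contents exceed the error
    budget `tau`. Returns leaf cells as (center, half_size, points)."""
    if leaves is None:
        leaves = []
    if depth == max_depth or planarity_error(pts) <= tau:
        leaves.append((center, half, pts))
        return leaves
    for dx in (-1, 1):
        for dy in (-1, 1):
            for dz in (-1, 1):
                # Partition points by sign relative to the cell center.
                mask = np.ones(len(pts), dtype=bool)
                for axis, s in enumerate((dx, dy, dz)):
                    d = pts[:, axis] - center[axis]
                    mask &= (d >= 0) if s > 0 else (d < 0)
                if mask.any():
                    child = center + 0.5 * half * np.array([dx, dy, dz])
                    adaptive_octree(pts[mask], child, 0.5 * half, tau,
                                    max_depth, depth + 1, leaves)
    return leaves

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A flat patch fits one plane exactly -> stays a single cell (1 token).
    flat = np.zeros((200, 3))
    flat[:, :2] = rng.uniform(-1, 1, size=(200, 2))
    print(len(adaptive_octree(flat, np.zeros(3), 1.0, 0.05, 4)))
    # A curved surface (unit sphere samples) keeps subdividing.
    u = rng.normal(size=(200, 3))
    sphere = u / np.linalg.norm(u, axis=1, keepdims=True)
    print(len(adaptive_octree(sphere, np.zeros(3), 1.0, 0.05, 4)))
```

The variable leaf count is exactly what makes the token sequence adaptive: a query-based transformer would then cross-attend one learnable query per leaf against the shape features to produce that leaf's latent token.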