🤖 AI Summary
Existing LLM tokenizers segment CAD sequences into natural-language subwords, discarding geometric structural semantics and hindering attention mechanisms from effectively modeling primitive operations. To address this, we propose CAD-Tokenizer, the first modality-specific tokenization framework designed explicitly for CAD primitives. It employs a serialized Vector Quantized Variational Autoencoder (VQ-VAE) to achieve primitive-level pooling and incorporates geometrically constrained decoding to produce compact, structure-aware CAD representations. The framework unifies text-to-3D generation and editing tasks and is end-to-end compatible with large language model paradigms. Experiments demonstrate that CAD-Tokenizer significantly outperforms both general-purpose LLMs and specialized baselines on text-to-CAD generation and editing, achieving state-of-the-art performance in both quantitative metrics and visual fidelity.
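To make the two core tokenizer steps concrete, here is a minimal NumPy sketch of (1) pooling per-token encoder states into one vector per CAD primitive and (2) snapping each pooled vector to its nearest codebook entry, as in a VQ-VAE. The shapes, the mean-pooling rule, and the codebook size are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8                                # embedding dimension (assumed)
codebook = rng.normal(size=(16, d))  # 16 learned code vectors (assumed size)

# Encoder outputs for a serialized CAD sequence of 6 tokens, grouped into
# primitives: tokens 0-2 form a sketch, tokens 3-5 an extrusion.
token_states = rng.normal(size=(6, d))
primitive_spans = [(0, 3), (3, 6)]

def quantize(states, spans, codes):
    """Mean-pool each primitive span, then return nearest-codebook indices."""
    pooled = np.stack([states[a:b].mean(axis=0) for a, b in spans])
    # squared L2 distance from each pooled vector to every code vector
    d2 = ((pooled[:, None, :] - codes[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1), pooled

indices, pooled = quantize(token_states, primitive_spans, codebook)
print(indices)  # one discrete token per primitive
```

The key effect is compression: a whole primitive (many subword tokens under a standard LLM tokenizer) becomes a single discrete token, so attention operates over primitives rather than word pieces.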
📝 Abstract
Computer-Aided Design (CAD) is a foundational component of industrial prototyping, where models are defined not by raw coordinates but by construction sequences such as sketches and extrusions. This sequential structure enables both efficient prototype initialization and subsequent editing. Text-guided CAD prototyping, which unifies Text-to-CAD generation and CAD editing, has the potential to streamline the entire design pipeline. However, prior work has not explored this setting, largely because standard large language model (LLM) tokenizers decompose CAD sequences into natural-language word pieces, failing to capture primitive-level CAD semantics and hindering attention modules from modeling geometric structure. We conjecture that a multimodal tokenization strategy, aligned with CAD's primitive and structural nature, can provide more effective representations. To this end, we propose CAD-Tokenizer, a framework that represents CAD data with modality-specific tokens using a sequence-based VQ-VAE with primitive-level pooling and constrained decoding. This design produces compact, primitive-aware representations that align with CAD's structural nature. Applied to unified text-guided CAD prototyping, CAD-Tokenizer significantly improves instruction following and generation quality, achieving better quantitative and qualitative performance over both general-purpose LLMs and task-specific baselines.
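The constrained decoding mentioned above can be sketched as logit masking: at each step, tokens the CAD grammar forbids are masked out before selection, so the decoder can only emit structurally valid sequences. The toy vocabulary and transition rules below (sketch, then curves, then extrude) are illustrative assumptions, not the paper's actual grammar.

```python
import numpy as np

VOCAB = ["<sos>", "SKETCH", "LINE", "ARC", "EXTRUDE", "<eos>"]
ALLOWED = {                      # assumed transition rules, for illustration
    "<sos>": {"SKETCH"},
    "SKETCH": {"LINE", "ARC"},
    "LINE": {"LINE", "ARC", "EXTRUDE"},
    "ARC": {"LINE", "ARC", "EXTRUDE"},
    "EXTRUDE": {"SKETCH", "<eos>"},
}

def constrained_step(logits, prev_token):
    """Mask disallowed successors to -inf, then pick the best remaining token."""
    mask = np.full(len(VOCAB), -np.inf)
    for i, tok in enumerate(VOCAB):
        if tok in ALLOWED.get(prev_token, set()):
            mask[i] = 0.0
    return int(np.argmax(logits + mask))

rng = np.random.default_rng(1)
prev, seq = "<sos>", []
while prev != "<eos>" and len(seq) < 10:
    prev = VOCAB[constrained_step(rng.normal(size=len(VOCAB)), prev)]
    seq.append(prev)
print(seq)  # every adjacent pair respects the grammar
```

Even with random logits, the masking guarantees the output is a well-formed sequence under the stated grammar; in the real system the logits would come from the LLM and the grammar from CAD primitive structure.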