🤖 AI Summary
This work addresses the challenge of high-quality structured data synthesis with large language models (LLMs). We propose Tabby, a simple post-training modification to the standard Transformer architecture designed for table generation. The contributions are threefold: (1) column-specific parameterization via a Gated Mixture-of-Experts (Gated MoE) mechanism, which models differences across columns with dedicated sets of parameters; (2) Plain, a lightweight LLM table training technique that requires neither complex prompting nor elaborate task-specific pipelines; and (3) extension beyond tables to more general structured data such as nested JSON. Experiments show that Tabby produces synthetic tabular data of quality near or equal to that of real data, and pairing it with Plain yields up to a 44% improvement in quality over prior methods. Tabby also reaches parity with real data on a nested JSON dataset, positioning it as a strong approach for LLM-driven structured data generation.
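To make the column-specific Gated MoE idea concrete, the sketch below shows one plausible way per-column parameters could be attached to an existing Transformer feed-forward block: each column gets its own small expert MLP, and a learned per-column gate blends the expert's output with the shared block's output. The class name, layer sizes, and gating form are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class ColumnGatedMoE(nn.Module):
    # Illustrative sketch: one feed-forward expert per table column, blended
    # with a shared MLP by a learned per-column gate. Sizes and wiring are
    # assumptions for exposition, not the published Tabby design.
    def __init__(self, hidden_dim: int, num_columns: int):
        super().__init__()

        def mlp():
            return nn.Sequential(
                nn.Linear(hidden_dim, 4 * hidden_dim),
                nn.GELU(),
                nn.Linear(4 * hidden_dim, hidden_dim),
            )

        self.shared = mlp()  # stands in for the pretrained Transformer block
        self.experts = nn.ModuleList(mlp() for _ in range(num_columns))
        self.gate = nn.Parameter(torch.zeros(num_columns))  # per-column gate logits

    def forward(self, hidden: torch.Tensor, column_ids: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_dim)
        # column_ids: (batch, seq) long tensor mapping each token to its column
        shared_out = self.shared(hidden)
        expert_out = torch.zeros_like(hidden)
        for col, expert in enumerate(self.experts):
            mask = (column_ids == col).unsqueeze(-1)  # (batch, seq, 1)
            if mask.any():
                expert_out = torch.where(mask, expert(hidden), expert_out)
        # Blend shared and column-expert outputs with a per-token gate in [0, 1].
        g = torch.sigmoid(self.gate)[column_ids].unsqueeze(-1)
        return g * expert_out + (1.0 - g) * shared_out
```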
📝 Abstract
While advances in large language models (LLMs) have greatly improved the quality of synthetic text data in recent years, synthesizing tabular data has received relatively less attention. We address this disparity with Tabby, a simple but powerful post-training modification to the standard Transformer language model architecture, enabling its use for tabular dataset synthesis. Tabby enables the representation of differences across columns using Gated Mixture-of-Experts, with column-specific sets of parameters. Empirically, Tabby results in data quality near or equal to that of real data. By pairing our novel LLM table training technique, Plain, with Tabby, we observe up to a 44% improvement in quality over previous methods. We also show that Tabby extends beyond tables to more general structured data, reaching parity with real data on a nested JSON dataset as well.
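The abstract names Plain, the table training technique paired with Tabby, but does not spell out its data format. The sketch below shows one plausible way a row or nested JSON record could be serialized into plain text for standard language-model fine-tuning; the "column is value" rendering and the function names are assumptions, not the paper's specification.

```python
import json


def row_to_text(row: dict) -> str:
    # Render one table row as plain text; the "column is value" phrasing is an
    # illustrative assumption, not the published Plain format.
    return ", ".join(f"{col} is {val}" for col, val in row.items())


def record_to_text(record: dict) -> str:
    # Nested JSON records can simply be serialized as a compact JSON string.
    return json.dumps(record, separators=(", ", ": "))


if __name__ == "__main__":
    row = {"age": 39, "workclass": "State-gov", "education": "Bachelors"}
    print(row_to_text(row))
    # age is 39, workclass is State-gov, education is Bachelors
    print(record_to_text({"patient": {"age": 39, "labs": {"hb": 13.2}}}))
```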