Tabby: Tabular Data Synthesis with Language Models

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of high-quality structured data synthesis with large language models (LLMs). The authors propose Tabby, a simple post-training modification to the standard Transformer architecture that enables table generation. The key contributions are threefold: (1) a Gated Mixture-of-Experts (MoE) mechanism that gives each column its own set of parameters, enabling fine-grained modeling of column-level heterogeneity; (2) Plain, a lightweight table-training technique that avoids complex prompting or task-specific schemes; and (3) generalization beyond tables to more general structured data. Experiments show that Tabby produces synthetic tabular data of quality near or equal to that of real data, improving on prior methods by up to 44%, and that it reaches parity with real data on a nested JSON dataset as well.

📝 Abstract
While advances in large language models (LLMs) have greatly improved the quality of synthetic text data in recent years, synthesizing tabular data has received relatively less attention. We address this disparity with Tabby, a simple but powerful post-training modification to the standard Transformer language model architecture, enabling its use for tabular dataset synthesis. Tabby enables the representation of differences across columns using Gated Mixture-of-Experts, with column-specific sets of parameters. Empirically, Tabby results in data quality near or equal to that of real data. By pairing our novel LLM table training technique, Plain, with Tabby, we observe up to a 44% improvement in quality over previous methods. We also show that Tabby extends beyond tables to more general structured data, reaching parity with real data on a nested JSON dataset as well.
Problem

Research questions and friction points this paper is trying to address.

Synthetic tabular data from language models lags behind the quality of real data.
Standard Transformer language models do not represent differences across table columns.
Table synthesis methods do not readily extend to more general structured data such as nested JSON.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tabby, a post-training modification that adapts standard Transformer LLMs for tabular data synthesis
Gated Mixture-of-Experts layers with column-specific sets of parameters
Plain, a novel LLM table-training technique that, paired with Tabby, yields up to a 44% quality improvement
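The Gated MoE idea described above can be pictured as a gate that routes each token's hidden state to the expert owned by that token's column, so each column gets dedicated parameters. The following is a minimal, hypothetical sketch in plain Python, not the authors' implementation; the names `ColumnGatedMoE` and `make_expert`, and all parameter values, are invented for illustration:

```python
# Hypothetical sketch of a hard-gated mixture-of-experts head in which
# each table column owns its own expert parameters, so column-level
# heterogeneity is modeled by separate weights rather than shared ones.

def make_expert(weight, bias):
    # A tiny stand-in "expert": a scalar affine map applied elementwise
    # to a (toy) hidden-state vector represented as a list of floats.
    return lambda hidden: [weight * x + bias for x in hidden]

class ColumnGatedMoE:
    def __init__(self, experts):
        self.experts = experts  # one expert per table column

    def forward(self, hidden, column_index):
        # Hard gate: the token's column index selects which expert runs.
        return self.experts[column_index](hidden)

# Two columns with different (made-up) parameter sets.
moe = ColumnGatedMoE([make_expert(2.0, 0.0), make_expert(1.0, 1.0)])
col0_out = moe.forward([1.0, 2.0], column_index=0)  # -> [2.0, 4.0]
col1_out = moe.forward([1.0, 2.0], column_index=1)  # -> [2.0, 3.0]
```

The design choice this illustrates: because the gate is determined by the column a token belongs to, each column's distribution can be modeled with its own parameters while the rest of the Transformer stays shared.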
Sonia Cromp
PhD Student, University of Wisconsin-Madison
machine learning, artificial intelligence
Satya Sai Srinath Namburi Gnvv
GE HealthCare
Mohammed Alkhudhayri
University of Wisconsin-Madison
Catherine Cao
University of Wisconsin-Madison
Samuel Guo
University of Wisconsin-Madison
Nicholas Roberts
PhD candidate UW-Madison
Machine Learning, AutoML, data-centric AI
Frederic Sala
Assistant Professor, University of Wisconsin
Data-centric AI, Machine learning, Information theory