Team, Then Trim: An Assembly-Line LLM Framework for High-Quality Tabular Data Generation

πŸ“… 2026-02-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses key challenges in high-quality tabular data synthesis: class imbalance, selection bias, and low fidelity. The authors propose a "team-based generation plus pruning" paradigm that models tabular synthesis as an assembly-line process: multiple domain-knowledge-guided specialized large language models collaboratively generate data components, and the results are then systematically evaluated and refined through a three-stage, plug-and-play quality control pipeline. This approach integrates multi-agent collaboration with structured quality assurance, and it outperforms state-of-the-art methods on both synthetic and real-world datasets. The resulting synthetic data exhibit markedly improved fidelity and utility, effectively supporting downstream machine learning tasks.

πŸ“ Abstract
While tabular data is fundamental to many real-world machine learning (ML) applications, acquiring high-quality tabular data is usually labor-intensive and expensive. Limited by the scarcity of observations, tabular datasets often exhibit critical deficiencies, such as class imbalance, selection bias, and low fidelity. To address these challenges, building on recent advances in Large Language Models (LLMs), this paper introduces Team-then-Trim (T$^2$), a framework that synthesizes high-quality tabular data through a collaborative team of LLMs, followed by a rigorous three-stage plug-in data quality control (QC) pipeline. In T$^2$, tabular data generation is conceptualized as a manufacturing process: specialized LLMs, guided by domain knowledge, are tasked with generating different data components sequentially, and the resulting products, i.e., the synthetic data, are systematically evaluated across multiple dimensions of QC. Empirical results on both simulated and real-world datasets demonstrate that T$^2$ outperforms state-of-the-art methods in producing high-quality tabular data, highlighting its potential to support downstream models when direct data collection is practically infeasible.
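The abstract describes two phases: specialized generators fill in data components sequentially (the "team"), and a staged quality control pipeline prunes the output (the "trim"). A minimal sketch of that shape, with simple rule-based specialists and checks standing in for the paper's LLM agents and QC stages (all field names, specialists, and thresholds here are illustrative assumptions, not the authors' implementation):

```python
import random

# Illustrative sketch only: rule-based "specialists" stand in for
# domain-knowledge-guided LLMs, and three simple checks stand in for
# the paper's three-stage QC pipeline.

def demographics_specialist(row):
    # Hypothetical specialist: fills demographic fields first.
    row["age"] = random.randint(18, 90)
    return row

def vitals_specialist(row):
    # Hypothetical specialist: conditions on fields produced upstream,
    # mimicking the sequential assembly-line structure.
    row["systolic_bp"] = 100 + row["age"] // 3 + random.randint(-10, 10)
    return row

SPECIALISTS = [demographics_specialist, vitals_specialist]

def schema_check(row):
    # QC stage 1 (assumed): structural validity of the generated row.
    return set(row) == {"age", "systolic_bp"}

def range_check(row):
    # QC stage 2 (assumed): domain plausibility of the values.
    return 18 <= row["age"] <= 90 and 80 <= row["systolic_bp"] <= 200

def dedup_check(row, seen):
    # QC stage 3 (assumed): diversity, rejecting exact duplicates.
    key = (row["age"], row["systolic_bp"])
    if key in seen:
        return False
    seen.add(key)
    return True

def team_then_trim(n_rows):
    seen, kept = set(), []
    for _ in range(n_rows):
        row = {}
        for specialist in SPECIALISTS:  # the assembly line ("team")
            row = specialist(row)
        # staged pruning ("trim"): a row must pass every stage
        if schema_check(row) and range_check(row) and dedup_check(row, seen):
            kept.append(row)
    return kept

rows = team_then_trim(50)
```

The plug-in character of the QC pipeline shows up here as the independence of the three check functions: each stage can be swapped or extended without touching the generators.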
Problem

Research questions and friction points this paper is trying to address.

tabular data
data quality
class imbalance
selection bias
low fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Tabular Data Generation
Data Quality Control
Collaborative LLM Framework
Synthetic Data
πŸ”Ž Similar Papers
No similar papers found.
Congjing Zhang, Department of Industrial & Systems Engineering, University of Washington
Ryan Feng Lin, Department of Industrial & Systems Engineering, University of Washington
Ruoxuan Bao, Department of Management, Shanghai University
Shuai Huang, University of Washington
Statistical Modeling and Analysis · Machine Learning · Healthcare · Manufacturing