Under the Hood of Tabular Data Generation Models: Benchmarks with Extensive Tuning

📅 2024-06-18
📈 Citations: 1
Influential: 1
🤖 AI Summary
This work addresses the challenge of simultaneously ensuring fidelity, privacy, and utility in tabular data generation. We systematically evaluate five recent families of generative models, spanning diffusion-, GAN-, and VAE-based approaches (e.g., CTGAN and TVAE), across 16 real-world datasets. To reduce tuning overhead while preserving near-optimal performance, we propose a streamlined, model-specific hyperparameter search space. Our analysis shows that the advantage of diffusion models is highly sensitive to tuning intensity and vanishes under a constrained GPU budget. We further optimize feature-encoding strategies (e.g., embedding and one-hot) jointly with model architectures. Finally, we establish a standardized, reproducible benchmark: all models undergo dataset-specific hyperparameter tuning, yielding consistent improvements in generation quality, and the evaluation results and full tuning protocols are publicly released.
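The joint optimization of encoding strategies with model hyperparameters can be pictured as treating the encoding choice as one more dimension of the search space. The sketch below is a minimal illustration of that idea using plain random search; the space contents, the `sample_config`/`tune` helpers, and the hyperparameter names are all hypothetical, not the paper's actual setup.

```python
import random

# Hypothetical joint search space: the categorical-feature encoding is
# searched alongside ordinary model hyperparameters.
SEARCH_SPACE = {
    "encoding": ["one-hot", "embedding"],   # feature-encoding strategy
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "num_layers": [2, 4, 6],
}

def sample_config(rng: random.Random) -> dict:
    """Draw one joint (encoding, hyperparameters) configuration."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def tune(score_fn, budget: int, seed: int = 0) -> dict:
    """Random search: keep the best-scoring joint configuration."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = sample_config(rng)
        score = score_fn(cfg)
        if score > best_score:
            best, best_score = cfg, score
    return best
```

Because the encoding is sampled together with the other hyperparameters, a single tuning run can discover, say, that embeddings pair well with deeper architectures, which a two-stage search (encoding first, hyperparameters second) could miss.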

📝 Abstract
The ability to train generative models that produce realistic, safe and useful tabular data is essential for data privacy, imputation, oversampling, explainability or simulation. However, generating tabular data is not straightforward due to its heterogeneity, non-smooth distributions, complex dependencies and imbalanced categorical features. Although diverse methods have been proposed in the literature, there is a need for a unified evaluation, under the same conditions, on a variety of datasets. This study addresses this need by fully considering the optimization of: hyperparameters, feature encodings, and architectures. We investigate the impact of dataset-specific tuning on five recent model families for tabular data generation through an extensive benchmark on 16 datasets. These datasets vary in terms of size (an average of 80,000 rows), data types, and domains. We also propose a reduced search space for each model that allows for quick optimization, achieving nearly equivalent performance at a significantly lower cost. Our benchmark demonstrates that, for most models, large-scale dataset-specific tuning substantially improves performance compared to the original configurations. Furthermore, we confirm that diffusion-based models generally outperform other models on tabular data. However, this advantage is not significant when the entire tuning and training process is restricted to the same GPU budget.
Problem

Research questions and friction points this paper is trying to address.

Evaluating tabular data generation models under unified conditions
Optimizing hyperparameters, encodings, and architectures for tabular data
Assessing performance impact of dataset-specific tuning across models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extensive hyperparameter and architecture optimization
Proposing reduced search space for efficient tuning
Benchmarking diffusion models under equal GPU constraints
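The "reduced search space" idea above can be made concrete with a small sketch: prune each hyperparameter's candidate list to the few values that mattered most, and the exhaustive grid shrinks multiplicatively. The spaces and values below are illustrative assumptions, not the paper's actual per-model grids.

```python
# Hypothetical full search space for one model family.
FULL_SPACE = {
    "learning_rate": [1e-5, 1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [256, 512, 1024, 4096],
    "num_layers": [1, 2, 3, 4, 6],
    "dropout": [0.0, 0.1, 0.3, 0.5],
}

# Reduced space: keep only the values that (by assumption) stayed
# near-optimal in preliminary runs.
REDUCED_SPACE = {
    "learning_rate": [1e-4, 1e-3],
    "batch_size": [512, 1024],
    "num_layers": [2, 4],
    "dropout": [0.0, 0.3],
}

def grid_size(space: dict) -> int:
    """Number of configurations in an exhaustive grid over the space."""
    n = 1
    for values in space.values():
        n *= len(values)
    return n
```

Here the full grid has 5 × 4 × 5 × 4 = 400 configurations, while the reduced grid has 2⁴ = 16, a 25× cut in tuning cost for the same GPU budget per trial.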
G. C. N. Kindji
Univ Rennes, IUF, Inria, CNRS, IRISA, Rennes, 35000, France
L. Rojas-Barahona
Orange Labs, Lannion, 22300, France
Elisa Fromont
Professor, Université de Rennes, France
Data Mining, Machine Learning, Computer Vision, Time Series Analysis
Tanguy Urvoy
Orange
Machine Learning, Reinforcement Learning, Generative Modeling