ReTabSyn: Realistic Tabular Data Synthesis via Reinforcement Learning

📅 2026-03-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of generating high-quality synthetic tabular data under low-data, class-imbalanced, and distribution-shift settings, where existing generative models often fail to accurately capture the full joint distribution, resulting in limited utility for downstream tasks. To overcome this, the authors propose a reinforcement learning–based approach that shifts focus from modeling the entire joint distribution to preserving the conditional distribution $P(y|\mathbf{X})$, which is most critical for predictive performance. Leveraging a language model as the generator and incorporating a reinforcement learning feedback loop, the method dynamically optimizes the retention of feature-target relationships while allowing flexible integration of expert-defined constraints. Extensive experiments demonstrate that the proposed framework consistently outperforms state-of-the-art baselines across multiple low-resource benchmarks, offering strong practicality, controllability, and scalability.
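The paper's pipeline is not reproduced here, but the core reward idea in the summary above, scoring synthetic batches by how well they preserve the real data's conditional distribution $P(y|\mathbf{X})$, can be illustrated with a toy sketch. Everything below (the per-bucket frequency estimate, the overlap-based reward, and all function names) is a hypothetical illustration under simplifying assumptions (discrete features, exact bucket matching), not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a reward signal that scores a
# batch of synthetic rows by how closely the conditional label distribution
# P(y | x) they imply matches that of real data, estimated here by simple
# per-feature-bucket label frequencies.
from collections import Counter, defaultdict

def conditional_dist(rows):
    """Map each feature tuple x to a label-frequency dict estimating P(y | x)."""
    counts = defaultdict(Counter)
    for *x, y in rows:
        counts[tuple(x)][y] += 1
    return {x: {y: c / sum(cnt.values()) for y, c in cnt.items()}
            for x, cnt in counts.items()}

def reward(real_rows, synth_rows):
    """Average overlap between real and synthetic P(y | x); higher is better, in [0, 1]."""
    real, synth = conditional_dist(real_rows), conditional_dist(synth_rows)
    overlaps = []
    for x, p_real in real.items():
        p_synth = synth.get(x, {})
        # Overlap = 1 - total variation distance on the shared label support.
        overlaps.append(sum(min(p_real.get(y, 0.0), p_synth.get(y, 0.0))
                            for y in set(p_real) | set(p_synth)))
    return sum(overlaps) / len(overlaps)

# Toy data: rows are (feature_1, feature_2, label).
real = [(0, 0, "neg"), (0, 0, "neg"), (1, 1, "pos"), (1, 1, "pos")]
good = [(0, 0, "neg"), (1, 1, "pos")]   # preserves the feature-target relation
bad  = [(0, 0, "pos"), (1, 1, "neg")]   # labels flipped
print(reward(real, good))  # 1.0
print(reward(real, bad))   # 0.0
```

In an RL fine-tuning loop, a score like this could serve as the reward for a policy-gradient update of the generator, and expert-defined constraints could be folded in as additional reward terms or penalties; the paper's actual reward and training algorithm may differ.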

๐Ÿ“ Abstract
Deep generative models can help with data scarcity and privacy by producing synthetic training data, but in low-data, imbalanced tabular settings they struggle to fully learn the complex data distribution. We argue that striving for the full joint distribution could be overkill; for greater data efficiency, models should prioritize learning the conditional distribution $P(y \mid \mathbf{X})$, as suggested by recent theoretical analysis. We therefore overcome this limitation with ReTabSyn, a Reinforced Tabular Synthesis pipeline that provides direct feedback on feature-correlation preservation during synthesizer training. This objective encourages the generator to prioritize the most useful predictive signals when training data is limited, thereby strengthening downstream model utility. We fine-tune a language-model-based generator with this approach, and across benchmarks with small sample sizes, class imbalance, and distribution shift, ReTabSyn consistently outperforms state-of-the-art baselines. Moreover, our approach readily extends to controlling various aspects of synthetic tabular data, such as applying expert-specified constraints on generated observations.
Problem

Research questions and friction points this paper is trying to address.

tabular data synthesis
data scarcity
class imbalance
conditional distribution
synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Tabular Data Synthesis
Conditional Distribution
Data Efficiency
Feature Correlation Preservation
Xiaofeng Lin
PhD Candidate, Boston University
Sequential Decision Making, Robotics
Seungbae Kim
University of South Florida, USA
Zhuoya Li
University of California, Los Angeles, USA
Zachary DeSoto
University of California, Los Angeles, USA
Charles Fleming
Cisco, USA; University of Mississippi, USA
Guang Cheng
University of California, Los Angeles, USA