🤖 AI Summary
This work addresses the challenge of generating high-quality synthetic tabular data under low-data, class-imbalanced, and distribution-shift settings, where existing generative models often fail to capture the full joint distribution accurately, limiting their utility for downstream tasks. To overcome this, the authors propose a reinforcement learning-based approach that shifts the focus from modeling the entire joint distribution to preserving the conditional distribution $P(y\mid\mathbf{X})$, which matters most for predictive performance. Using a language model as the generator together with a reinforcement learning feedback loop, the method directly optimizes the retention of feature-target relationships while allowing flexible integration of expert-defined constraints. Extensive experiments show that the proposed framework consistently outperforms state-of-the-art baselines across multiple low-resource benchmarks, offering strong practicality, controllability, and scalability.
Abstract
Deep generative models can mitigate data scarcity and privacy concerns by producing synthetic training data, but in low-data, imbalanced tabular settings they struggle to fully learn the complex data distribution. We argue that striving for the full joint distribution can be overkill; for greater data efficiency, models should prioritize learning the conditional distribution $P(y\mid \mathbf{X})$, as suggested by recent theoretical analysis. We therefore overcome this limitation with \textbf{ReTabSyn}, a \textbf{Re}inforced \textbf{Tab}ular \textbf{Syn}thesis pipeline that provides direct feedback on feature-correlation preservation during synthesizer training. This objective encourages the generator to prioritize the most useful predictive signals when training data is limited, thereby strengthening downstream model utility. We fine-tune a language-model-based generator with this approach, and across benchmarks featuring small sample sizes, class imbalance, and distribution shift, ReTabSyn consistently outperforms state-of-the-art baselines. Moreover, our approach readily extends to controlling other aspects of synthetic tabular data, such as enforcing expert-specified constraints on generated observations.
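To make the feedback signal concrete, here is a minimal sketch of one way a "feature-correlation preservation" reward could be scored for a synthetic batch. The function name and the exponential scoring are illustrative assumptions, not the paper's actual reward; it simply compares feature–target correlations between real and synthetic data, which is one proxy for how well $P(y\mid \mathbf{X})$ is retained.

```python
import numpy as np

def correlation_preservation_reward(real_X, real_y, syn_X, syn_y):
    """Hypothetical reward: 1.0 when the synthetic batch reproduces the
    real feature-target correlations exactly, decaying toward 0 as the
    correlation profiles diverge. Illustrative only; ReTabSyn's actual
    reward definition is not reproduced here.
    """
    def feat_target_corr(X, y):
        # Pearson correlation between each feature column and the target.
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        num = Xc.T @ yc
        den = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        return num / den

    # Mean absolute gap between real and synthetic correlation vectors.
    gap = np.abs(feat_target_corr(real_X, real_y)
                 - feat_target_corr(syn_X, syn_y)).mean()
    return float(np.exp(-gap))
```

A reward like this could be fed back to the generator via a standard policy-gradient update; a batch that inverts the feature–target relationships scores strictly lower than one that preserves them.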