๐ค AI Summary
Existing tabular diffusion models often inherit sensitive attribute biases (e.g., gender, race) from training data, leading to unfair synthetic data. To address this, we propose the first fairness-aware tabular diffusion framework. Our method introduces a novel sensitive-attribute guidance mechanism that explicitly balances the joint distribution of target labels and sensitive attributes during sampling via conditional gradient guidance. We further integrate a unified embedding encoder for mixed-type features and a fairness-regularized loss to jointly optimize statistical fidelity and downstream utility. Evaluated on multiple benchmark datasets, our approach achieves average improvements of over 10% in demographic parity ratio and equal opportunity ratioโsubstantially outperforming state-of-the-art baselines while preserving data quality and model performance.
๐ Abstract
Diffusion models have emerged as a robust framework for various generative tasks, including tabular data synthesis. However, current tabular diffusion models tend to inherit bias in the training dataset and generate biased synthetic data, which may influence discriminatory actions. In this research, we introduce a novel tabular diffusion model that incorporates sensitive guidance to generate fair synthetic data with balanced joint distributions of the target label and sensitive attributes, such as sex and race. The empirical results demonstrate that our method effectively mitigates bias in training data while maintaining the quality of the generated samples. Furthermore, we provide evidence that our approach outperforms existing methods for synthesizing tabular data on fairness metrics such as demographic parity ratio and equalized odds ratio, achieving improvements of over $10%$. Our implementation is available at https://github.com/comp-well-org/fair-tab-diffusion.