🤖 AI Summary
To address distributional shift and uncontrolled quality in synthetic data augmentation, this paper proposes the first conformal prediction–based framework for synthetic data selection. Operating in a black-box setting, without access to model parameters or any retraining, it provides theoretically guaranteed risk control: conformal p-values quantify how consistent each generated sample is with the original data distribution, enabling dynamic filtering of low-quality or high-bias instances. Compared to the no-augmentation baseline, the method achieves up to a 40% improvement in F1 score; against existing filtering-based augmentation approaches, it yields an average 4% gain, markedly improving model robustness and generalization. The core contribution is applying conformal prediction to quality control in data augmentation, bridging statistical rigor with engineering practicality.
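The p-value-based filtering idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the nonconformity score (here, absolute deviation from the calibration mean) and the threshold `alpha` are assumptions chosen for the example.

```python
import numpy as np

def conformal_pvalues(calib_scores, test_scores):
    """Conformal p-value of each test score against calibration scores.

    p_i = (1 + #{calibration scores >= test score i}) / (n + 1).
    A small p-value means the sample is more nonconforming than almost
    all calibration points, i.e. likely off-distribution.
    """
    calib = np.asarray(calib_scores)
    test = np.asarray(test_scores)
    # Pairwise comparison via broadcasting: shape (m, n) -> count per test point.
    ge = (calib[None, :] >= test[:, None]).sum(axis=1)
    return (1.0 + ge) / (len(calib) + 1.0)

def filter_synthetic(calib_scores, synth_scores, alpha=0.1):
    """Keep synthetic samples whose conformal p-value exceeds alpha."""
    return conformal_pvalues(calib_scores, synth_scores) > alpha

# Toy demo: real data ~ N(0,1); "bad" synthetic samples drift to N(5,1).
rng = np.random.default_rng(0)
calib = np.abs(rng.normal(0.0, 1.0, 500))   # nonconformity scores of real data
good = np.abs(rng.normal(0.0, 1.0, 100))    # in-distribution synthetic scores
bad = np.abs(rng.normal(5.0, 1.0, 100))     # off-distribution synthetic scores

keep_good = filter_synthetic(calib, good, alpha=0.1)  # most are retained
keep_bad = filter_synthetic(calib, bad, alpha=0.1)    # nearly all are rejected
```

By construction, roughly a fraction `alpha` of genuinely in-distribution samples is discarded, which is the provable risk-control knob the summary refers to.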
📝 Abstract
With promising empirical performance across a wide range of applications, synthetic data augmentation appears to be a viable solution to data scarcity and the demands of increasingly data-intensive models. Its effectiveness lies in expanding the training set in a way that reduces estimator variance while introducing only minimal bias. Controlling this bias is therefore critical: effective data augmentation should generate diverse samples from the same underlying distribution as the training set, with minimal shifts. In this paper, we propose conformal data augmentation, a principled data filtering framework that leverages the power of conformal prediction to produce diverse synthetic data while filtering out poor-quality generations with provable risk control. Our method is simple to implement and requires neither access to internal model logits nor large-scale model retraining. We demonstrate the effectiveness of our approach across multiple tasks, including topic prediction, sentiment analysis, image classification, and fraud detection, showing consistent performance improvements of up to 40% in F1 score over unaugmented baselines, and 4% over other filtered augmentation baselines.