Filtering with Confidence: When Data Augmentation Meets Conformal Prediction

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address distributional shift and uncontrolled quality in synthetic data augmentation, this paper proposes the first conformal prediction–based framework for synthetic data selection. Without requiring access to model parameters or retraining, it enables theoretically guaranteed risk control under a black-box setting: conformal p-values quantify the consistency of generated samples with the original data distribution, enabling dynamic filtering of low-quality or high-bias instances. Compared to the no-augmentation baseline, our method achieves up to a 40% improvement in F1 score; against existing filtering-based augmentation approaches, it yields an average 4% gain, significantly enhancing model robustness and generalization. The core contribution lies in the first application of conformal prediction to quality control in data augmentation—uniquely bridging statistical rigor with engineering practicality.

Technology Category

Application Category

📝 Abstract
With promising empirical performance across a wide range of applications, synthetic data augmentation appears a viable solution to data scarcity and the demands of increasingly data-intensive models. Its effectiveness lies in expanding the training set in a way that reduces estimator variance while introducing only minimal bias. Controlling this bias is therefore critical: effective data augmentation should generate diverse samples from the same underlying distribution as the training set, with minimal shifts. In this paper, we propose conformal data augmentation, a principled data filtering framework that leverages the power of conformal prediction to produce diverse synthetic data while filtering out poor-quality generations with provable risk control. Our method is simple to implement, requires no access to internal model logits, nor large-scale model retraining. We demonstrate the effectiveness of our approach across multiple tasks, including topic prediction, sentiment analysis, image classification, and fraud detection, showing consistent performance improvements of up to 40% in F1 score over unaugmented baselines, and 4% over other filtered augmentation baselines.
Problem

Research questions and friction points this paper is trying to address.

Filtering poor-quality synthetic data generations with provable risk control
Addressing data scarcity while minimizing distribution shifts in augmentation
Improving model performance across multiple tasks without retraining requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal prediction filters poor-quality synthetic data
Method requires no model logits or retraining
Framework provides provable risk control for augmentation
🔎 Similar Papers
No similar papers found.
Zixuan Wu
Zixuan Wu
Georgia Institute of Technology
Robotics
S
So Won Jeong
Booth Business School, University of Chicago
Y
Yating Liu
Department of Statistics, University of Chicago
Y
Yeo Jin Jung
Department of Statistics, University of Chicago
Claire Donnat
Claire Donnat
University of Chicago
Statisticsgraphsbiomedical data analysislatent variable modelsbrain connectomics