Conformal Risk Minimization with Variance Reduction

📅 2024-11-03
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the low sample efficiency of conformal training (ConfTr), which induces high gradient noise and training instability, this paper proposes the first variance-reduced conformal risk minimization framework for black-box models. The core innovation is the integration of variance-reduction techniques from stochastic optimization, specifically SAGA and control variates, into the gradient estimation of the ConfTr objective, enabling joint optimization of prediction-set size and statistical coverage guarantees. This markedly reduces gradient variance, improving training stability and accelerating convergence. Experiments on multiple benchmark datasets show that the method achieves smaller average prediction-set sizes and faster convergence than standard ConfTr and other baselines, while preserving predictive accuracy and computational efficiency. By easing the trade-off between statistical rigor and practical scalability, the approach advances the deployability of conformal prediction in real-world applications.
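The summary names control variates as one of the variance-reduction tools. As an illustrative sketch only (not the authors' implementation; `control_variate_estimate` and the toy data are hypothetical), a control-variate estimator replaces a noisy sample mean of g with the mean of g − c·(h − E[h]), where h is a correlated quantity with known expectation and c is fit to minimize variance:

```python
import numpy as np

def control_variate_estimate(g_samples, h_samples, h_mean):
    """Variance-reduced estimate of E[g] using a control variate h
    whose expectation h_mean is known (or cheaply approximated).

    Estimator: mean of g - c * (h - h_mean), with c chosen to
    minimize variance: c* = Cov(g, h) / Var(h).
    """
    g = np.asarray(g_samples, dtype=float)
    h = np.asarray(h_samples, dtype=float)
    cov = np.cov(g, h, ddof=1)      # 2x2 sample covariance matrix
    c = cov[0, 1] / cov[1, 1]       # variance-minimizing coefficient
    return np.mean(g - c * (h - h_mean))

# Toy demonstration: g is strongly correlated with h, so subtracting
# the fitted control variate cancels most of the sampling noise.
rng = np.random.default_rng(0)
h = rng.normal(0.0, 1.0, 10_000)                 # control variate, known mean 0
g = 2.0 * h + rng.normal(0.0, 0.1, 10_000)       # noisy target, E[g] = 0
est = control_variate_estimate(g, h, h_mean=0.0)
```

In the paper's setting the analogous idea is applied to stochastic gradients of the ConfTr objective rather than to a scalar mean; the variance mechanics are the same.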

📝 Abstract
Conformal prediction (CP) is a distribution-free framework for achieving probabilistic guarantees on black-box models. CP is generally applied to a model post-training. Recent research efforts, on the other hand, have focused on optimizing CP efficiency during training. We formalize this concept as the problem of conformal risk minimization (CRM). In this direction, conformal training (ConfTr) by Stutz et al. (2022) is a technique that seeks to minimize the expected prediction set size of a model by simulating CP in-between training updates. Despite its potential, we identify a strong source of sample inefficiency in ConfTr that leads to overly noisy estimated gradients, introducing training instability and limiting practical use. To address this challenge, we propose variance-reduced conformal training (VR-ConfTr), a CRM method that incorporates a variance reduction technique in the gradient estimation of the ConfTr objective function. Through extensive experiments on various benchmark datasets, we demonstrate that VR-ConfTr consistently achieves faster convergence and smaller prediction sets compared to baselines.
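The abstract says ConfTr simulates CP between training updates to minimize expected prediction-set size. A minimal sketch of the standard differentiable relaxation behind this idea (the sigmoid temperature and `soft_set_size` helper are assumptions for illustration, not the paper's exact formulation): the hard indicator "score exceeds threshold τ" is smoothed so the set size admits gradients.

```python
import numpy as np

def soft_set_size(scores, tau, temperature=0.1):
    """Differentiable surrogate for prediction-set membership.

    A label belongs to the set when its conformity score exceeds the
    threshold tau; the hard 0/1 indicator is relaxed with a sigmoid so
    the expected set size can be driven down by gradient descent.
    """
    return 1.0 / (1.0 + np.exp(-(scores - tau) / temperature))

# Toy batch: 2 examples, 3 classes, conformity scores in [0, 1].
scores = np.array([[0.9, 0.2, 0.1],
                   [0.6, 0.55, 0.05]])
tau = 0.5
# Per-example soft set size: sum of soft memberships over classes.
sizes = soft_set_size(scores, tau).sum(axis=1)
```

During ConfTr-style training, τ itself is obtained by simulating the CP calibration step on half of each batch, and the mean of `sizes` on the other half serves as the (noisy) loss whose gradient VR-ConfTr aims to stabilize.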
Problem

Research questions and friction points this paper is trying to address.

CP is typically applied only post-training, leaving prediction-set efficiency unoptimized
ConfTr's gradient estimates are sample-inefficient and overly noisy
Gradient noise destabilizes training and slows convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variance-reduced conformal training (VR-ConfTr) framework
Variance reduction built into gradient estimation of the ConfTr objective
Faster convergence and smaller prediction sets than baselines