Cost-Sensitive Conformal Training with Provably Controllable Learning Bounds

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In existing conformal prediction frameworks, training methods that rely on surrogate indicator functions (e.g., Sigmoid or erf) lack a uniform error bound, leaving the learned prediction-set boundaries uncontrolled. To address this, we propose a novel conformal training paradigm that eliminates surrogate-function approximation entirely. Our approach directly optimizes the expected size of the prediction set by integrating a cost-sensitive loss with a true-label ranking–weighted strategy. We provide theoretical guarantees showing that the upper bound on prediction set size is tightly governed by the expected rank of the true label, establishing, for the first time, provably controllable learning bounds. Empirically, under a strict 1−α coverage guarantee, our method reduces the average prediction set size by 21.38%, significantly enhancing both the efficiency and reliability of uncertainty quantification.

📝 Abstract
Conformal prediction (CP) is a general framework for quantifying the predictive uncertainty of machine learning models: it outputs a prediction set that contains the true label with a valid probability. To align the uncertainty measured by CP, conformal training methods minimize the size of the prediction sets. A typical approach uses a surrogate indicator function, usually the Sigmoid or Gaussian error function. However, these surrogate functions have no uniform error bound with respect to the indicator function, leading to uncontrollable learning bounds. In this paper, we propose a simple cost-sensitive conformal training algorithm that does not rely on the indicator-approximation mechanism. Specifically, we theoretically show that the expected size of the prediction set is upper bounded by the expected rank of the true label. To this end, we develop a rank-weighting strategy that assigns each data sample a weight based on the rank of its true label. Our analysis provably demonstrates the tightness between the proposed weighted objective and the expected size of conformal prediction sets. Extensive experiments verify the validity of our theoretical insights and show superior empirical performance over other conformal training methods in terms of predictive efficiency, with a 21.38% reduction in average prediction set size.
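To make the abstract's setup concrete, the sketch below builds standard split-conformal prediction sets on synthetic softmax scores and also computes the rank of the true label, the quantity the paper uses to bound set size. The data, the separation constant, and the use of the common "one minus true-class probability" conformity score are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_test, K = 500, 200, 10
alpha = 0.1  # target miscoverage; sets should contain the true label ~90% of the time

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def make_data(n):
    # Synthetic 10-class problem: the true class gets a logit boost.
    y = rng.integers(0, K, size=n)
    logits = rng.normal(size=(n, K))
    logits[np.arange(n), y] += 2.0
    return softmax(logits), y

cal_p, cal_y = make_data(n_cal)
test_p, test_y = make_data(n_test)

# Conformity score on the calibration split: 1 - probability of the true label.
scores = 1.0 - cal_p[np.arange(n_cal), cal_y]
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, level, method="higher")

# Prediction set: every class whose score stays under the calibrated threshold.
sets = (1.0 - test_p) <= qhat
coverage = sets[np.arange(n_test), test_y].mean()
avg_size = sets.sum(axis=1).mean()

# Rank of the true label (1 = top class). The paper's bound ties the expected
# set size to this expected rank, which motivates rank-weighted training.
ranks = (test_p > test_p[np.arange(n_test), test_y][:, None]).sum(axis=1) + 1
print(f"coverage={coverage:.3f}  avg set size={avg_size:.2f}  mean rank={ranks.mean():.2f}")
```

The split-conformal guarantee holds for any conformity score under exchangeability; what training can change is the average set size, which is where the rank of the true label enters.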
Problem

Research questions and friction points this paper is trying to address.

Develops conformal training without indicator approximation for uncertainty quantification
Addresses uncontrollable learning bounds from surrogate functions in prediction sets
Minimizes expected prediction set size via theoretical rank-weighting strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-sensitive conformal training without indicator approximation
Rank weighting strategy using true label positions
Theoretically tight bound for prediction set size minimization
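The rank-weighting idea in the bullets above can be sketched as a per-sample weight on an ordinary classification loss: samples whose true label is ranked low by the model are penalized more. This is a minimal illustrative sketch assuming a cross-entropy base loss and a linear rank weight; the paper's exact cost-sensitive objective may differ.

```python
import numpy as np

def rank_weighted_loss(probs, y):
    """Cross-entropy weighted by the rank of the true label (a sketch).

    probs : (n, K) predicted class probabilities
    y     : (n,)  integer true labels
    """
    # Rank of the true label under the model's scores (1 = top-ranked).
    order = np.argsort(-probs, axis=1)
    ranks = np.argmax(order == y[:, None], axis=1) + 1
    nll = -np.log(probs[np.arange(len(y)), y] + 1e-12)
    # Heavier penalty when the true label ranks low, pushing it up the
    # ranking and thereby shrinking the eventual prediction sets.
    return np.mean(ranks * nll)

# Same confidence mass, different rank of the true label (class 0):
probs_high = np.array([[0.7, 0.1, 0.1, 0.1]])  # true label ranked 1st
probs_low = np.array([[0.1, 0.7, 0.1, 0.1]])   # true label ranked 2nd
y = np.array([0])
loss_high = rank_weighted_loss(probs_high, y)
loss_low = rank_weighted_loss(probs_low, y)
```

Because the expected set size is bounded by the expected rank of the true label, driving this weighted objective down directly tightens the set-size bound rather than an uncontrolled surrogate of it.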
👥 Authors
Xuesong Jia — School of EECS, Washington State University
Yuanjie Shi — School of EECS, Washington State University
Ziquan Liu — Assistant Professor, Queen Mary University of London
Yi Xu — Dalian University of Technology
Yan Yan — School of EECS, Washington State University