🤖 AI Summary
This work addresses a fundamental trade-off between efficiency and predictive entropy in conformal prediction (CP). Unlike conventional CP methods that optimize solely for coverage or prediction-set size, we propose an entropy-constrained conformal correction framework that treats predictive entropy as an explicit hard constraint. Given a user-specified entropy threshold, the framework fine-tunes the base model, or wraps it with an extra module, using a conformal-aware inefficiency loss, seeking a Pareto-optimal balance between efficiency and uncertainty quantification. The analysis also identifies the mechanism behind the intrinsic conflict between these two objectives and provides a verifiable pathway for their joint optimization. Evaluated on computer vision and graph learning benchmarks, our approach improves the efficiency of state-of-the-art CP methods by up to 34.4% while keeping predictive uncertainty controllably below the threshold.
📝 Abstract
Conformal prediction (CP) provides a comprehensive framework for producing statistically rigorous uncertainty sets for black-box machine learning models. To further improve the efficiency of CP, conformal correction has been proposed: it fine-tunes the base model, or wraps it with an extra module, using a conformal-aware inefficiency loss. In this work, we empirically and theoretically identify a trade-off between CP efficiency and the entropy of the model's predictions. We then propose an entropy-constrained conformal correction method that explores a better Pareto optimum between efficiency and entropy. Extensive experimental results on both computer vision and graph datasets demonstrate the efficacy of the proposed method. For instance, it improves the efficiency of state-of-the-art CP methods by up to 34.4% under a given entropy threshold.
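To make the quantities in the abstract concrete, the sketch below implements standard split conformal prediction (score s = 1 − p_y, set size as the inefficiency measure) together with a *hypothetical* entropy-constrained surrogate objective: mean set size plus a hinge penalty that activates when mean predictive entropy exceeds a user-chosen budget. This is an illustrative stand-in under our own assumptions, not the paper's actual loss; the function names (`conformal_quantile`, `entropy_constrained_inefficiency`) and the hinge form of the constraint are ours.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha):
    """Split-CP quantile of the scores s = 1 - p_y on a calibration set."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample-corrected quantile level for (1 - alpha) coverage.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, level, method="higher")

def set_sizes(probs, q):
    """Size of each prediction set {k : 1 - p_k <= q}."""
    return (probs >= 1.0 - q).sum(axis=1)

def mean_entropy(probs):
    """Average Shannon entropy of the predictive distributions (nats)."""
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=1).mean())

def entropy_constrained_inefficiency(probs, labels, alpha,
                                     entropy_max, lam=1.0):
    """Hypothetical surrogate objective: mean prediction-set size plus a
    hinge penalty on entropy exceeding the user-specified budget."""
    q = conformal_quantile(probs, labels, alpha)
    size = float(set_sizes(probs, q).mean())
    penalty = max(0.0, mean_entropy(probs) - entropy_max)
    return size + lam * penalty
```

In an actual conformal-correction loop this objective would be a differentiable relaxation (e.g. a soft set-size as in ConfTr-style training) and minimized over the correction module's parameters; the hard-threshold version above only illustrates what is being traded off.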