🤖 AI Summary
Existing full-ranking conformal prediction methods are overly conservative because they rely on upper bounds of non-conformity scores, often yielding excessively large prediction sets. This work proposes a novel approach that leverages the negative hypergeometric distribution of absolute ranks among calibration samples, conditioned on relative ranks, to derive the exact distribution of non-conformity scores. In doing so, it constructs tighter conformal thresholds while rigorously maintaining the required coverage guarantee. The method significantly improves prediction set efficiency, reducing average set size by up to 36% compared to current baselines, and thereby advances practical applicability without compromising theoretical validity.
📝 Abstract
Quantifying uncertainty is critical for the safe deployment of ranking models in real-world applications. Recent work offers a rigorous solution using conformal prediction in a full ranking scenario, which aims to construct prediction sets for the absolute ranks of test items based on the relative ranks of calibration items. However, relying on upper bounds of non-conformity scores renders the method overly conservative, resulting in unnecessarily large prediction sets. To address this, we propose Distribution-informed Conformal Ranking (DCR), which produces efficient prediction sets by deriving the exact distribution of non-conformity scores. In particular, we find that the absolute ranks of calibration items follow negative hypergeometric distributions, conditional on their relative ranks. DCR thus uses the rank distribution to derive the non-conformity score distribution and determine conformal thresholds. We provide theoretical guarantees that DCR achieves improved efficiency over the baseline while ensuring valid coverage under mild assumptions. Extensive experiments demonstrate the superiority of DCR, reducing average prediction set size by up to 36% while maintaining valid coverage.
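The key observation can be illustrated concretely. Under exchangeability, a calibration item with relative rank j among the calibration items has an absolute rank of j plus the number of test items ranked ahead of it, and that count follows a negative hypergeometric distribution. The sketch below is a stdlib-only illustration under that assumption; the sample sizes and variable names are illustrative, not taken from the paper:

```python
from math import comb

def nhg_pmf(k, M, n, r):
    """Negative hypergeometric pmf: probability of drawing k successes
    before the r-th failure, sampling without replacement from M items
    of which n are successes."""
    return comb(k + r - 1, k) * comb(M - r - k, n - k) / comb(M, n)

n_cal, n_test = 10, 5   # illustrative calibration / test set sizes
j = 3                   # relative rank of one calibration item

# Absolute rank = j + (number of test items ranked ahead of it); that
# count is NHG with M = n_cal + n_test, n = n_test successes, r = j.
abs_rank_pmf = {
    j + k: nhg_pmf(k, n_cal + n_test, n_test, j)
    for k in range(n_test + 1)
}
assert abs(sum(abs_rank_pmf.values()) - 1.0) < 1e-9  # valid distribution
```

With the exact pmf of each calibration item's absolute rank in hand, one can in principle compute the distribution of non-conformity scores directly rather than bounding it, which is the source of the tighter thresholds described above.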