🤖 AI Summary
This paper addresses the challenge of jointly leveraging multiple conformity scores in conformal prediction to shrink prediction sets while strictly guaranteeing coverage. Rather than selecting a single best score, the proposed Confidence-Level Allocation (COLA) framework optimally allocates confidence levels across the prediction sets induced by multiple scores, minimizing empirical set size under guaranteed marginal coverage. COLA comes in three variants: COLA-s and COLA-f, which secure finite-sample marginal coverage via sample splitting and full conformalization, respectively, and COLA-l, an individualized allocation strategy that promotes local size efficiency with asymptotic conditional coverage. Experiments on synthetic and real-world datasets demonstrate that COLA substantially reduces prediction set size relative to state-of-the-art baselines while maintaining valid coverage and improving conditional coverage.
📝 Abstract
Conformal prediction offers a distribution-free framework for constructing prediction sets with finite-sample coverage. Yet, efficiently leveraging multiple conformity scores to reduce prediction set size remains a major open challenge. Instead of selecting a single best score, this work introduces a principled aggregation strategy, COnfidence-Level Allocation (COLA), that optimally allocates confidence levels across multiple conformal prediction sets to minimize empirical set size while maintaining provable coverage. Two variants are further developed, COLA-s and COLA-f, which guarantee finite-sample marginal coverage via sample splitting and full conformalization, respectively. In addition, we develop COLA-l, an individualized allocation strategy that promotes local size efficiency while achieving asymptotic conditional coverage. Extensive experiments on synthetic and real-world datasets demonstrate that COLA achieves considerably smaller prediction sets than state-of-the-art baselines while maintaining valid coverage.
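To make the allocation idea concrete, here is a minimal sketch of one natural reading of the split-based variant (COLA-s), not the authors' actual implementation. It assumes two hypothetical conformity scores (absolute residuals from two made-up models), a union-bound construction in which the intersection of two conformal sets at levels 1-α₁ and 1-α₂ with α₁+α₂ = α miscovers with probability at most α, and a grid search for the allocation: one data split picks the allocation minimizing average set size, a second split re-conformalizes the thresholds so the finite-sample marginal guarantee holds.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
n_tune, n_cal, n_test = 1000, 1000, 2000

def draw(n):
    # Synthetic regression data (illustrative, not from the paper).
    x = rng.normal(size=n)
    y = x + 0.3 * rng.normal(size=n)
    return x, y

def scores(x, y):
    # Two hypothetical conformity scores: absolute residuals from a
    # well-specified model (yhat = x) and a biased one (yhat = 0.5 x).
    return np.abs(y - x), np.abs(y - 0.5 * x)

def threshold(s, a):
    # Standard split-conformal quantile at level 1 - a.
    m = len(s)
    k = min(int(np.ceil((m + 1) * (1 - a))), m)
    return np.sort(s)[k - 1]

def avg_size(x, q1, q2):
    # Mean length of the per-point intersection of the two intervals
    # [x - q1, x + q1] and [0.5 x - q2, 0.5 x + q2].
    lo = np.maximum(x - q1, 0.5 * x - q2)
    hi = np.minimum(x + q1, 0.5 * x + q2)
    return np.maximum(hi - lo, 0.0).mean()

xt, yt = draw(n_tune); s1t, s2t = scores(xt, yt)
xc, yc = draw(n_cal);  s1c, s2c = scores(xc, yc)

# Step 1 (tuning split): grid-search allocations (a1, a2) with
# a1 + a2 = alpha, keeping the one with the smallest average set size.
best_a1, best_size = 0.0, np.inf
for a1 in np.linspace(0.0, alpha, 21):
    q1, q2 = threshold(s1t, a1), threshold(s2t, alpha - a1)
    size = avg_size(xt, q1, q2)
    if size < best_size:
        best_a1, best_size = a1, size

# Step 2 (calibration split): re-conformalize at the chosen allocation;
# by the union bound, the intersected set has marginal coverage >= 1 - alpha.
q1 = threshold(s1c, best_a1)
q2 = threshold(s2c, alpha - best_a1)

xs, ys = draw(n_test); s1s, s2s = scores(xs, ys)
coverage = np.mean((s1s <= q1) & (s2s <= q2))
print(f"allocation a1={best_a1:.3f}, empirical coverage={coverage:.3f}")
```

The key design point this sketch illustrates is why two splits appear in COLA-s: the allocation is data-dependent, so reusing the same data to both choose it and compute thresholds would invalidate the coverage guarantee.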