AI Summary
This study systematically evaluates the uncertainty calibration capabilities of vision and multimodal foundation models within the conformal prediction (CP) framework, with emphasis on risk-sensitive applications. We examine prevalent Vision Transformer (ViT) architectures, three canonical CP methods, namely Adaptive Prediction Sets (APS), Split CP, and Adaptive CP, and multiple image classification benchmarks. Our key contributions are: (i) ViT-based models exhibit inherent compatibility with CP, achieving well-calibrated uncertainty estimates without retraining; (ii) adapter-based fine-tuning substantially outperforms prompt learning for CP adaptation; (iii) APS achieves the best trade-off between theoretical guarantees and empirical performance, strictly maintaining marginal coverage while yielding more compact prediction sets; and (iv) post-hoc confidence calibration degrades the efficacy of Adaptive CP. Collectively, these findings demonstrate that modern foundation models possess strong inherent conformalizability, offering a robust pathway for uncertainty quantification in high-stakes visual recognition tasks.
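For context on how the APS prediction sets discussed above are built, the following is a minimal NumPy sketch of split conformal prediction with the APS score. It assumes softmax probabilities from a frozen backbone are already available; the function names and the deterministic (non-randomized) APS variant are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of split conformal prediction with the APS score.
# Assumes `probs` are softmax outputs of a (frozen) classifier; all
# names here are illustrative, not reproduced from the paper.
import numpy as np

def aps_scores(probs, labels):
    """APS non-conformity score: cumulative probability mass of all
    classes ranked at least as likely as the true class."""
    order = np.argsort(-probs, axis=1)                   # classes by descending probability
    sorted_probs = np.take_along_axis(probs, order, axis=1)
    cumsum = np.cumsum(sorted_probs, axis=1)
    ranks = np.argmax(order == labels[:, None], axis=1)  # rank of the true class
    return cumsum[np.arange(len(labels)), ranks]

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration scores."""
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def aps_prediction_sets(probs, qhat):
    """Prediction set = classes whose APS score falls below the
    calibrated threshold qhat (top class always kept)."""
    order = np.argsort(-probs, axis=1)
    sorted_probs = np.take_along_axis(probs, order, axis=1)
    cumsum = np.cumsum(sorted_probs, axis=1)
    keep = cumsum <= qhat
    keep[:, 0] = True                                    # never return an empty set
    return [set(order[i, keep[i]]) for i in range(len(probs))]

# Usage (hypothetical arrays from the model):
# qhat = conformal_quantile(aps_scores(cal_probs, cal_labels), alpha=0.1)
# sets = aps_prediction_sets(test_probs, qhat)
```

Average set size of the resulting sets is the usual efficiency metric, while the fraction of test points whose true label lands in its set measures empirical coverage.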
Abstract
Recent advances in self-supervision and contrastive learning have brought the performance of foundation models to unprecedented levels in a variety of tasks. Fueled by this progress, these models are becoming the prevailing approach for a wide array of real-world vision problems, including risk-sensitive and high-stakes applications. However, ensuring safe deployment in these scenarios requires a more comprehensive understanding of their uncertainty modeling capabilities, which has barely been explored. In this work, we delve into the behavior of vision and vision-language foundation models under Conformal Prediction (CP), a statistical framework that provides theoretical guarantees of marginal coverage of the true class. Across extensive experiments spanning popular vision classification benchmarks, well-known vision foundation models, and three CP methods, our findings reveal that foundation models are well-suited for conformalization procedures, particularly those integrating Vision Transformers. Furthermore, we show that calibrating the confidence predictions of these models degrades the efficiency of the conformal sets produced by adaptive CP methods. In contrast, few-shot adaptation to downstream tasks generally enhances conformal scores, and we identify Adapters as a more conformable alternative to Prompt Learning strategies. Our empirical study identifies APS as particularly promising in the context of vision foundation models, as it does not violate the marginal coverage property across multiple challenging yet realistic scenarios.
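As a reference for the marginal coverage property mentioned above, a standard statement of the split-CP guarantee is the following (with α the user-specified error rate; the notation is ours, not taken from the paper):

$$
\mathbb{P}\bigl(Y_{\mathrm{test}} \in \mathcal{C}_\alpha(X_{\mathrm{test}})\bigr) \;\ge\; 1-\alpha,
\qquad
\mathcal{C}_\alpha(x) \;=\; \bigl\{\, y \,:\, s(x,y) \le \hat{q} \,\bigr\},
$$

where $s(\cdot,\cdot)$ is a non-conformity score (e.g., the APS score) and $\hat{q}$ is the $\lceil (n+1)(1-\alpha)\rceil / n$ empirical quantile of the scores computed on $n$ exchangeable calibration points. The guarantee is marginal: it holds on average over calibration and test data, not conditionally on each input.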