🤖 AI Summary
Multi-output regression faces key challenges in uncertainty quantification: lack of theoretical guarantees, difficulty modeling complex output dependencies, and high computational overhead. Method: This paper introduces two novel conformal scoring functions: (i) a model-agnostic score enabling asymptotically conditional coverage for arbitrary generative models; and (ii) an invertible-model-based score drastically reducing inference cost. It further establishes the first systematic comparison framework for multi-output conformal prediction and constructs multivariate prediction intervals with finite-sample coverage guarantees. Contribution/Results: Evaluated fairly across 32 tabular datasets using a unified open-source codebase, the proposed methods consistently outperform state-of-the-art approaches in both coverage accuracy and computational efficiency. The work delivers a new paradigm for uncertainty quantification in multi-target regression—rigorous in theory, scalable in practice, and broadly applicable to diverse generative modeling architectures.
📝 Abstract
Quantifying uncertainty in multivariate regression is essential in many real-world applications, yet existing methods for constructing prediction regions often face limitations such as the inability to capture complex dependencies, lack of coverage guarantees, or high computational cost. Conformal prediction provides a robust framework for producing distribution-free prediction regions with finite-sample coverage guarantees. In this work, we present a unified comparative study of multi-output conformal methods, exploring their properties and interconnections. Based on our findings, we introduce two classes of conformity scores that achieve asymptotic conditional coverage: one is compatible with any generative model, and the other offers low computational cost by leveraging invertible generative models. Finally, we conduct a comprehensive empirical study across 32 tabular datasets to compare all the multi-output conformal methods considered in this work. All methods are implemented within a unified code base to ensure a fair and consistent comparison.