🤖 AI Summary
Standard conformal prediction often yields overly wide or misaligned prediction intervals under heteroscedastic or skewed data, because it fixes the interval center and allocates errors equally to both tails. This work proposes CoCP, the first framework to jointly optimize the prediction center $m(x)$ and radius $h(x)$. By alternating quantile regression on folded absolute residuals with center correction guided by a differentiable coverage objective, CoCP avoids full conditional density estimation while guaranteeing finite-sample marginal validity and asymptotically approaching the length-optimal conditional interval. Combining normalized nonconformity scores with split-conformal calibration, CoCP produces significantly shorter prediction intervals and achieves state-of-the-art conditional coverage on both synthetic and real-world datasets.
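The "differentiable coverage objective" can be illustrated with a smooth surrogate for the coverage indicator $\mathbf{1}\{m(x)-h(x)\le y\le m(x)+h(x)\}$. The following is a minimal sketch, not the paper's exact objective: the product-of-sigmoids surrogate, the function names, and the temperature `tau` are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_coverage(y, m, h, tau=0.1):
    # Smooth surrogate for the indicator 1{m - h <= y <= m + h}:
    # product of sigmoids of the signed distances to the two boundaries.
    a = (y - (m - h)) / tau   # signed distance above the lower boundary
    b = ((m + h) - y) / tau   # signed distance below the upper boundary
    return sigmoid(a) * sigmoid(b)

def grad_m(y, m, h, tau=0.1):
    # Derivative of soft_coverage with respect to the center m.
    # It is near zero for points deep inside or far outside the interval
    # and peaks near the boundaries -- the property attributed to the
    # center-correction step.
    a = (y - (m - h)) / tau
    b = ((m + h) - y) / tau
    sa, sb = sigmoid(a), sigmoid(b)
    return (sa * sb * (1.0 - sb) - sa * (1.0 - sa) * sb) / tau
```

As `tau` shrinks, `soft_coverage` approaches the hard indicator and its gradient concentrates ever more tightly at the two interval endpoints, so gradient steps on $m(x)$ are driven by points near the current boundaries rather than by the bulk of the data.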
📝 Abstract
Conformal prediction (CP) provides finite-sample, distribution-free marginal coverage, but standard conformal regression intervals can be inefficient under heteroscedasticity and skewness. In particular, popular constructions such as conformalized quantile regression (CQR) often inherit a fixed notion of center and enforce equal-tailed errors, which can displace the interval away from high-density regions and produce unnecessarily wide sets. We propose Co-optimization for Adaptive Conformal Prediction (CoCP), a framework that learns prediction intervals by jointly optimizing a center $m(x)$ and a radius $h(x)$. CoCP alternates between (i) learning $h(x)$ via quantile regression on the folded absolute residual around the current center, and (ii) refining $m(x)$ with a differentiable soft-coverage objective whose gradients concentrate near the current interval boundaries, correcting mis-centering without estimating the full conditional density. Finite-sample marginal validity is guaranteed by split-conformal calibration with a normalized nonconformity score. Our theory characterizes the population fixed point of the soft objective and shows that, under standard regularity conditions, CoCP asymptotically approaches the length-minimizing conditional interval at the target coverage level as the estimation error and smoothing vanish. Experiments on synthetic and real benchmarks demonstrate that CoCP yields consistently shorter intervals and achieves state-of-the-art results on conditional-coverage diagnostics.
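The split-conformal calibration step with a normalized nonconformity score $s(x,y) = |y - m(x)|/h(x)$ is the standard locally weighted construction and can be sketched as follows. This is a minimal illustration assuming pre-fitted functions `m` and `h`; the function names and the finite-sample quantile rule are ours, not taken from the paper.

```python
import numpy as np

def conformal_calibrate(m, h, X_cal, y_cal, alpha=0.1):
    """Calibrated quantile q of the normalized score |y - m(x)| / h(x)."""
    scores = np.abs(y_cal - m(X_cal)) / h(X_cal)
    n = len(scores)
    # Finite-sample correction: the ceil((n+1)(1-alpha))-th smallest score.
    k = int(np.ceil((n + 1) * (1.0 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

def predict_interval(m, h, X, q):
    """Interval m(x) +/- q * h(x); coverage >= 1 - alpha under exchangeability."""
    center, radius = m(X), q * h(X)
    return center - radius, center + radius
```

On exchangeable data the resulting intervals cover with probability at least $1-\alpha$ regardless of how well `m` and `h` were fit; better estimates of the center and radius only shorten the intervals, which is why the co-optimization of $m(x)$ and $h(x)$ targets efficiency while calibration alone secures validity.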