🤖 AI Summary
This work addresses the coverage failure of conformal prediction (CP) under geometric distribution shifts, such as rotations and reflections, where standard CP guarantees degrade. The authors propose a pose-normalization-augmented CP framework whose core contribution is the first integration of pose normalization as a geometry-aware feature extractor within the CP pipeline. Crucially, the method requires no modification to the underlying black-box predictor, handles both discrete and continuous geometric transformations uniformly, and preserves rigorous marginal coverage guarantees. By computing the conformal procedure on pose-normalized features, the approach restores robustness without compromising CP's formal statistical assurances. Experiments demonstrate stable empirical coverage at or above the 95% target across diverse geometric shifts, significantly outperforming equivariant-model and data-augmentation baselines, while remaining fully compatible with arbitrary pre-trained predictors.
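To make the pipeline concrete, here is a minimal, self-contained sketch of how a pose-normalization step can be slotted in front of split conformal prediction without touching the predictor. Everything here is an illustrative assumption rather than the paper's implementation: the SVD-based `canonicalize` is a toy stand-in for a learned canonicalizer, and `predict` is a toy frozen black-box regressor.

```python
import numpy as np

def canonicalize(x):
    """Toy 2D pose normalization (hypothetical stand-in for a learned
    canonicalizer): center the point cloud and rotate it so its principal
    axis is aligned, making downstream features rotation-invariant."""
    x = x - x.mean(axis=0)        # remove translation
    _, _, vt = np.linalg.svd(x)   # principal directions of the cloud
    return x @ vt.T               # align principal axis with first coordinate

def predict(x_canon):
    """Any frozen black-box predictor; here a toy scalar regressor."""
    return x_canon[:, 0].mean()

def split_conformal_quantile(scores, alpha=0.05):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def random_rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
base = rng.normal(size=(30, 2)) * [3.0, 0.5]  # elongated template cloud

# Calibration: absolute-residual scores on canonicalized, randomly
# rotated inputs (the rotation plays the role of the geometric shift).
cal_scores = []
for _ in range(200):
    R = random_rotation(rng.uniform(0, 2 * np.pi))
    x = base @ R.T + rng.normal(scale=0.1, size=base.shape)
    y = 0.0                                   # toy label
    cal_scores.append(abs(predict(canonicalize(x)) - y))

qhat = split_conformal_quantile(np.array(cal_scores), alpha=0.05)

# Prediction interval for a new sample under an unseen rotation:
x_new = base @ random_rotation(1.3).T + rng.normal(scale=0.1, size=base.shape)
mu = predict(canonicalize(x_new))
interval = (mu - qhat, mu + qhat)
```

Because `canonicalize` is applied identically at calibration and test time, the predictor sees (approximately) pose-invariant inputs, which is the mechanism by which the calibration scores stay representative under rotated test data.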
📝 Abstract
We study the problem of conformal prediction (CP) under geometric data shifts, where data samples are susceptible to transformations such as rotations or flips. While CP endows prediction models with post-hoc uncertainty quantification and formal coverage guarantees, its practicality breaks down under distribution shifts that deteriorate model performance. To address this issue, we propose integrating geometric information, such as geometric pose, into the conformal procedure to reinstate its guarantees and ensure robustness under geometric shifts. In particular, we explore recent advances in pose canonicalization as a suitable information extractor for this purpose. Evaluating the combined approach across discrete and continuous shifts and against equivariant and augmentation-based baselines, we find that integrating geometric information with CP yields a principled way to address geometric shifts while maintaining broad applicability to black-box predictors.
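The marginal guarantee the abstract refers to can be sketched as follows, under the assumption (mine, not stated verbatim in the abstract) that the canonicalization map is fixed and applied identically to calibration and test inputs, so that scores computed on canonicalized features remain exchangeable:

```latex
% Let $g$ be a fixed pose-canonicalization map and let the prediction
% set $\mathcal{C}$ be built from nonconformity scores computed on
% canonicalized features. If the canonicalized calibration and test
% pairs $(g(X_1), Y_1), \dots, (g(X_{n+1}), Y_{n+1})$ are exchangeable,
% split conformal prediction retains its marginal guarantee:
\[
  \mathbb{P}\!\left( Y_{n+1} \in \mathcal{C}\bigl(g(X_{n+1})\bigr) \right)
  \;\geq\; 1 - \alpha,
\]
% where $\mathcal{C}$ uses the empirical
% $\lceil (n+1)(1-\alpha) \rceil / n$ quantile of calibration scores.
```

Intuitively, a geometric shift that acts only on pose is absorbed by $g$, so the canonicalized features, and hence the scores, look the same at calibration and test time.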