🤖 AI Summary
This work addresses the limited informativeness of conformal prediction in long-horizon tasks, where predictive uncertainty regions often inflate excessively. To mitigate this issue, the authors propose an equivariant conformal prediction framework that incorporates group symmetries as geometric priors by group-averaging pretrained models. By sharing nonconformity scores among samples within the same group orbit, the method compresses prediction sets. Theoretical analysis demonstrates that this approach contracts nonconformity scores in the increasing convex order, implying improved tail bounds and sharper prediction sets in expectation, particularly at high confidence levels. The authors then propose an experimental design on pedestrian trajectory forecasting to test whether the method yields tighter, more informative uncertainty quantification under stringent confidence requirements.
📝 Abstract
We study the effect of group symmetrization of pre-trained models on conformal prediction (CP), a post-hoc, distribution-free, finite-sample method of uncertainty quantification that offers formal coverage guarantees under the assumption of data exchangeability. Unfortunately, CP uncertainty regions can grow significantly in long-horizon missions, rendering the statistical guarantees uninformative. To that end, we propose infusing CP with geometric information via group-averaging of the pretrained predictor to distribute the non-conformity mass across the orbits. Each sample is now treated as a representative of an orbit, so its uncertainty can be mitigated by the other samples tied to it through the orbit-inducing elements of the symmetry group. Our approach provably yields contracted non-conformity scores in increasing convex order, implying improved exponential-tail bounds and sharper conformal prediction sets in expectation, especially at high confidence levels. We then propose an experimental design to test these theoretical claims in pedestrian trajectory prediction.
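To make the idea concrete, here is a minimal sketch of the pipeline the abstract describes: symmetrize a pretrained predictor by averaging over a group orbit, then run split conformal prediction on the residual scores. This is an illustrative toy, not the paper's implementation; the reflection group, the predictor `f`, and all function names are assumptions. When the data respect the symmetry but the model only approximately does, the symmetrized scores are (stochastically) smaller, giving a tighter conformal radius at the same coverage level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained 1-D regressor. The ground truth is odd
# (y(-x) = -y(x)), but the model breaks that symmetry with an even term.
def f(x):
    return np.sin(x) + 0.3 * x**2

# Group-averaged predictor over G = {identity, reflection x -> -x},
# transporting outputs back through the group action (negation here).
def f_sym(x):
    return 0.5 * (f(x) - f(-x))

# Calibration split drawn from an odd ground truth plus noise.
x_cal = rng.uniform(-3.0, 3.0, 500)
y_cal = np.sin(x_cal) + 0.1 * rng.standard_normal(500)

def conformal_quantile(scores, alpha):
    # Split-conformal quantile with the (n+1) finite-sample correction.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

alpha = 0.1  # target 90% coverage
q_plain = conformal_quantile(np.abs(y_cal - f(x_cal)), alpha)
q_sym = conformal_quantile(np.abs(y_cal - f_sym(x_cal)), alpha)

# The prediction set at a new point x is f(x) +/- q; the symmetrized
# predictor absorbs the symmetry-breaking error, shrinking the radius.
print(q_plain, q_sym)
```

In this toy the group average cancels the spurious even term exactly, so the residuals reduce to pure noise and the symmetrized radius is much smaller; both constructions retain the usual exchangeability-based coverage guarantee, mirroring the "sharper sets at the same confidence" claim.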