🤖 AI Summary
This work addresses performance degradation in equivariant model selection caused by symmetry bias and task-symmetry mismatch. We propose an uncertainty-aware paradigm that diverges from conventional error-based selection criteria. Systematically evaluating the Bayesian marginal likelihood, conformal prediction intervals, and uncertainty calibration across pre-trained models with varying symmetry constraints, we find that most uncertainty metrics correlate well with true generalization performance, while Bayesian model evidence does so inconsistently. Key contributions include: (i) revealing an intrinsic inconsistency between geometric and Bayesian notions of model complexity, demonstrating that the Bayesian evidence becomes unstable when the two are misaligned; and (ii) empirically validating that uncertainty-driven selection robustly improves model-task alignment. Experiments span diverse equivariant architectures (including SE(3)-, E(3)-, and SO(3)-equivariant models) and physical-science tasks such as molecular property prediction and particle physics simulation, confirming the method's generality and practical utility for symmetry-aware model selection.
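To make the frequentist route concrete, the sketch below shows split conformal prediction used as a selection criterion: each pretrained model is scored by the width of its (1 - alpha) prediction intervals calibrated on a held-out set, and the least uncertain model wins. This is not code from the paper; the function names and the absolute-residual nonconformity score are our illustrative assumptions.

```python
import numpy as np

def split_conformal_width(predict, X_cal, y_cal, alpha=0.1):
    """Width of (1 - alpha) split-conformal prediction intervals.

    With the absolute-residual nonconformity score, every interval is
    prediction +/- q, so the constant width 2*q summarizes a model's
    predictive uncertainty on the calibration distribution.
    """
    scores = np.abs(y_cal - predict(X_cal))              # nonconformity scores
    n = scores.size
    # Finite-sample-corrected quantile level required for valid coverage.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    return 2.0 * q

def select_by_uncertainty(models, X_cal, y_cal, alpha=0.1):
    """Rank candidate models (e.g. pretrained networks with different
    symmetry constraints) by interval width; return the narrowest."""
    widths = {name: split_conformal_width(f, X_cal, y_cal, alpha)
              for name, f in models.items()}
    return min(widths, key=widths.get), widths
```

Here `models` maps a label to a prediction callable, e.g. `{"se3": se3_model.predict, "unconstrained": mlp.predict}` (both names are placeholders); the returned width dictionary can also be inspected directly to see how uncertainty varies with the symmetry constraint.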
📝 Abstract
Equivariant models leverage prior knowledge of symmetries to improve predictive performance, but misspecified architectural constraints can harm it instead. While prior work has explored learning or relaxing such constraints, selecting among pretrained models with varying symmetry biases remains challenging. We examine this model-selection task from an uncertainty-aware perspective, comparing frequentist (via conformal prediction), Bayesian (via the marginal likelihood), and calibration-based measures to naive error-based evaluation. We find that uncertainty metrics generally align with predictive performance, but Bayesian model evidence does so inconsistently. We attribute this to a mismatch between Bayesian and geometric notions of model complexity, and discuss possible remedies. Our findings point to the potential of uncertainty measures to guide symmetry-aware model selection.
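As an illustration of the Bayesian route, the marginal likelihood has a closed form for Bayesian linear regression (Bishop, PRML, Sec. 3.5) and can be compared across feature maps that encode different symmetry assumptions. This is a toy stand-in, not the paper's deep equivariant setting; the feature maps, hyperparameters, and target function below are all illustrative assumptions.

```python
import numpy as np

def log_evidence(Phi, y, alpha=1.0, beta=25.0):
    """Exact log marginal likelihood of Bayesian linear regression with
    prior w ~ N(0, alpha^-1 I) and noise precision beta. Higher evidence
    means a better fit/complexity trade-off for the feature map Phi."""
    n, d = Phi.shape
    A = alpha * np.eye(d) + beta * Phi.T @ Phi        # posterior precision
    m = beta * np.linalg.solve(A, Phi.T @ y)          # posterior mean
    fit = 0.5 * beta * np.sum((y - Phi @ m) ** 2) + 0.5 * alpha * m @ m
    _, logdet = np.linalg.slogdet(A)
    return (0.5 * d * np.log(alpha) + 0.5 * n * np.log(beta)
            - fit - 0.5 * logdet - 0.5 * n * np.log(2 * np.pi))

# Toy task with a sign-flip (Z2) invariant target: compare a feature map
# that hard-codes the symmetry against a generic one via the evidence.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = np.cos(2 * x) + 0.1 * rng.normal(size=200)                       # even target
Phi_inv = np.stack([np.ones_like(x), x**2, np.cos(2 * x)], axis=1)   # invariant
Phi_gen = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)         # generic
print("invariant features:", log_evidence(Phi_inv, y))
print("generic features:  ", log_evidence(Phi_gen, y))
```

In this analytically tractable setting the evidence balances fit against complexity by construction; the abstract's point is that for deep equivariant models, (approximate) evidence can misjudge the geometric complexity induced by symmetry constraints, so this clean behavior need not transfer.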