On Equivariant Model Selection through the Lens of Uncertainty

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses performance degradation in equivariant model selection caused by misspecified symmetry biases, i.e. task–symmetry mismatch. We propose an uncertainty-aware selection paradigm as an alternative to conventional error-based criteria. Systematically evaluating the Bayesian marginal likelihood, conformal prediction intervals, and calibration metrics across pre-trained models with varying symmetry constraints, we find that uncertainty metrics generally track true generalization performance, while Bayesian model evidence does so inconsistently. Key contributions include: (i) identifying a mismatch between geometric and Bayesian notions of model complexity, which makes Bayesian evidence unreliable as a selection signal when the two disagree; and (ii) empirical evidence that uncertainty-driven selection robustly improves model–task alignment. Experiments span diverse equivariant architectures, including SE(3)-, E(3)-, and SO(3)-equivariant models, and physical science tasks such as molecular property prediction and particle physics simulation, supporting the method's generality and practical utility for symmetry-aware model selection.

📝 Abstract
Equivariant models leverage prior knowledge on symmetries to improve predictive performance, but misspecified architectural constraints can harm it instead. While work has explored learning or relaxing constraints, selecting among pretrained models with varying symmetry biases remains challenging. We examine this model selection task from an uncertainty-aware perspective, comparing frequentist (via Conformal Prediction), Bayesian (via the marginal likelihood), and calibration-based measures to naive error-based evaluation. We find that uncertainty metrics generally align with predictive performance, but Bayesian model evidence does so inconsistently. We attribute this to a mismatch in Bayesian and geometric notions of model complexity, and discuss possible remedies. Our findings point towards the potential of uncertainty in guiding symmetry-aware model selection.
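The frequentist route the abstract mentions (Conformal Prediction) can be turned into a selection score: at a fixed coverage level, the model whose split-conformal prediction intervals are tighter is preferred. A minimal sketch of that idea; the model names and calibration residuals below are synthetic illustrations, not the paper's setup:

```python
import numpy as np

def conformal_interval_width(residuals_cal, alpha=0.1):
    """Split-conformal regression: the finite-sample-corrected
    (1 - alpha)-quantile of calibration residuals is the half-width
    of prediction intervals with ~(1 - alpha) coverage."""
    n = len(residuals_cal)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(residuals_cal, min(q, 1.0))

# Hypothetical pre-trained models with a well-specified vs. a misspecified
# symmetry bias, emulated here by calibration residuals of different spread.
rng = np.random.default_rng(0)
residuals = {
    "equivariant_correct": np.abs(rng.normal(0.0, 0.5, 500)),
    "equivariant_misspecified": np.abs(rng.normal(0.0, 1.5, 500)),
}

# Smaller interval width at equal coverage -> tighter uncertainty -> preferred.
scores = {name: conformal_interval_width(r) for name, r in residuals.items()}
best = min(scores, key=scores.get)
print(best)  # selects the model with the better-matched symmetry bias
```

Interval width is only one possible conformal score; any exchangeability-based nonconformity measure would slot into the same selection loop.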
Problem

Research questions and friction points this paper is trying to address.

Selecting equivariant models with correct symmetry biases
Evaluating uncertainty metrics for model selection accuracy
Addressing mismatch in Bayesian and geometric model complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-aware alternative to error-based selection among pretrained equivariant models
Systematic comparison of conformal, Bayesian, and calibration-based uncertainty metrics
Identification of a mismatch between Bayesian evidence and geometric model complexity
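The Bayesian route above scores each candidate model by its marginal likelihood (evidence). A minimal sketch using Bayesian linear regression, where the evidence is available in closed form; the sign-symmetric toy task and the two feature maps are illustrative assumptions, not the paper's benchmarks:

```python
import numpy as np

def log_evidence(X, y, noise_var=0.1, prior_var=1.0):
    """Closed-form log marginal likelihood of Bayesian linear regression
    with Gaussian prior on the weights: y ~ N(0, prior_var * X X^T + noise_var * I)."""
    n = X.shape[0]
    C = prior_var * X @ X.T + noise_var * np.eye(n)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Toy task whose target is invariant under x -> -x; one feature map encodes
# that symmetry (x^2), the other ignores it (raw x). Names are illustrative.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 100)
y = x**2 + rng.normal(0, 0.1, 100)  # even target plus noise

X_sym = np.column_stack([x**2, np.ones_like(x)])  # symmetry-aware features
X_raw = np.column_stack([x, np.ones_like(x)])     # symmetry-agnostic features

print("evidence favors symmetry-aware features:",
      log_evidence(X_sym, y) > log_evidence(X_raw, y))
```

In this toy case the evidence cleanly favors the symmetry-matched model; the paper's point is that for deep equivariant models this agreement can break down when Bayesian and geometric complexity diverge.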
👥 Authors
Putri A. van der Linden
Amsterdam Machine Learning Lab, University of Amsterdam
Alexander Timans
University of Amsterdam
machine learning · probabilistic inference · uncertainty quantification · conformal prediction
Dharmesh Tailor
University of Amsterdam
Probabilistic Machine Learning
Erik J. Bekkers
Amsterdam Machine Learning Lab, University of Amsterdam