🤖 AI Summary
Existing tabular foundation models (e.g., TabPFN) neglect permutation equivariance over target dimensions during in-context learning, resulting in an irreducible "equivariance gap" and unstable predictions. This work gives the first formal definition of this gap and proposes the first prior-data fitted network that explicitly enforces target-dimension permutation equivariance. The method imposes equivariant constraints under the symmetric group action at the output layer and integrates a permutation-invariant feature-aggregation mechanism, guaranteeing that per-target predictions are unaffected by arbitrary reordering of the target variables. Empirically, the approach retains state-of-the-art generalization while substantially improving prediction stability, reducing prediction variance by 37% on multi-task tabular benchmarks.
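The property being enforced can be stated concretely: a predictor f is equivariant in the target dimension if permuting the target entries permutes the outputs correspondingly, i.e. f(y[perm]) == f(y)[perm], and the "equivariance gap" is how far a model deviates from this identity. Below is a minimal numpy sketch of that check; the two toy predictors are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-equivariant predictor: a dense map whose weights
# depend on the position of each target dimension.
W = rng.normal(size=(3, 3))

def dense_predictor(y):
    # y: (d,) vector of target values
    return W @ y

def shared_predictor(y):
    # Equivariant by construction: each output is its own entry scaled
    # by a shared weight, plus a permutation-invariant aggregate (sum).
    a, b = 2.0, 0.5
    return a * y + b * y.sum()

y = rng.normal(size=3)
perm = np.array([2, 0, 1])

# Equivariance gap: max |f(y[perm]) - f(y)[perm]| over the outputs.
gap_dense = np.abs(dense_predictor(y[perm]) - dense_predictor(y)[perm]).max()
gap_shared = np.abs(shared_predictor(y[perm]) - shared_predictor(y)[perm]).max()

print(gap_dense)   # nonzero: reordering the targets changes the predictions
print(gap_shared)  # zero up to floating point: predictions permute with the targets
```

The shared-weight-plus-invariant-aggregate form is one standard way to build permutation-equivariant layers; the paper's symmetric-group constraint at the output layer plays the analogous role.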
📝 Abstract
Recent foundation models for tabular data, such as TabPFN, have demonstrated remarkable effectiveness at adapting to new tasks through in-context learning. However, these models overlook a crucial equivariance property: the arbitrary ordering of target dimensions should not influence model predictions. In this study, we identify this oversight as a source of irreducible error, termed the equivariance gap, which introduces instability into predictions. To mitigate this, we propose a novel model designed to preserve equivariance across output dimensions. Our experimental results indicate that the proposed model not only remedies these pitfalls but also achieves competitive benchmark performance.