🤖 AI Summary
Bayesian inference suffers from prior sensitivity, likelihood misspecification, and high computational cost; while martingale posteriors (MGPs) bypass explicit prior-likelihood modeling by relying on predictive rules, the literature has offered few effective predictive-rule constructions for tabular data. This paper introduces TabMGP, the first MGP built on TabPFN, a transformer foundation model for tabular data, leveraging its autoregressive architecture to generate high-quality one-step-ahead predictive distributions and enabling uncertainty quantification by forward simulation without explicit priors or likelihoods. Empirically, TabMGP produces credible sets with coverage close to nominal levels across diverse tabular benchmarks, often outperforming both existing MGP constructions and standard Bayesian approaches.
📝 Abstract
Bayesian inference provides principled uncertainty quantification but is often limited by challenges of prior elicitation, likelihood misspecification, and computational burden. The martingale posterior (MGP, Fong et al., 2023) offers an alternative, replacing prior-likelihood elicitation with a predictive rule, namely a sequence of one-step-ahead predictive distributions, for forward data generation. The utility of MGPs depends on the choice of predictive rule, yet the literature has offered few compelling examples. Foundation transformers are well-suited here, as their autoregressive generation mirrors this forward simulation and their general-purpose design enables rich predictive modeling. We introduce TabMGP, an MGP built on TabPFN, a transformer foundation model that is currently state-of-the-art for tabular data. TabMGP produces credible sets with near-nominal coverage and often outperforms both existing MGP constructions and standard Bayes.
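To make the forward-simulation idea concrete: this is not the paper's TabMGP construction, but a minimal sketch of a martingale posterior using the classical Pólya-urn (Bayesian bootstrap) predictive rule, one of the simple MGP examples from the Fong et al. line of work. Each future observation is drawn from the one-step-ahead predictive (here, the empirical distribution of all points seen so far, observed plus generated), and the spread of a functional (here, the mean) across many forward simulations yields posterior uncertainty without any explicit prior or likelihood. All function and variable names below are illustrative.

```python
import numpy as np

def martingale_posterior_mean(data, n_forward=1000, n_samples=500, seed=0):
    """Sample the martingale posterior of the mean via forward simulation.

    Predictive rule (Polya urn / Bayesian bootstrap): each new point is
    drawn uniformly from all points seen so far, observed + generated.
    TabMGP replaces this simple rule with TabPFN's one-step-ahead
    predictive distributions; the forward-simulation loop is the same idea.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    for s in range(n_samples):
        urn = list(data)
        for _ in range(n_forward):
            # One-step-ahead predictive draw, then condition on it.
            urn.append(urn[rng.integers(len(urn))])
        samples[s] = np.mean(urn)  # functional of the completed sequence
    return samples

data = np.array([4.8, 5.1, 4.9, 5.3, 5.0])
post = martingale_posterior_mean(data)
lo, hi = np.quantile(post, [0.025, 0.975])  # 95% credible interval for the mean
```

The coverage claims in the abstract refer to intervals of exactly this kind: quantiles of a functional across many forward-simulated completions of the data.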