🤖 AI Summary
This paper addresses a limitation of conventional Bayesian inference: its reliance on a prespecified parametric model and prior distribution. The authors propose a model-free prediction-inference framework that requires neither an explicit statistical model nor a prior; instead, valid statistical inference is achieved solely through predictive rules satisfying a generalized martingale condition. Methodologically, the framework leverages the theory of asymptotically conditionally independent and identically distributed (a.c.i.d.) sequences together with bootstrap-type resampling schemes, and it unifies heterogeneous prediction approaches, including kernel estimators, the parametric Bayesian bootstrap, and copula-based algorithms. Crucially, generalized martingale theory allows the classical assumptions to be substantially relaxed. The primary contribution is a considerably expanded class of predictive rules that support valid inference, extending the applicability of existing model-free Bayesian methods and enhancing the flexibility, universality, and theoretical inclusiveness of nonparametric Bayesian inference.
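The summary does not spell out how the bootstrap-type resampling scheme turns a predictive rule into inference. As one concrete instance, here is a minimal Python sketch of predictive resampling with a kernel-type predictive rule. Everything in it (the name `predictive_resample`, the parameters `n_forward` and `bandwidth`, and the Gaussian-KDE update) is illustrative and assumed, not taken from the paper: forward-sample many future observations from the current predictive, update the predictive after each draw, and record a functional of the completed sequence; the spread of those recorded values across repetitions plays the role of a posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_resample(x_obs, n_forward=1000, bandwidth=0.3, n_draws=200):
    """Posterior-style draws of the mean via kernel-based predictive resampling.

    A hypothetical sketch: the predictive rule is a fixed-bandwidth Gaussian
    KDE over everything seen so far, not the paper's actual algorithm.
    """
    draws = []
    for _ in range(n_draws):
        x = list(x_obs)
        for _ in range(n_forward):
            # Kernel-type predictive rule: the next observation is a draw
            # from a Gaussian KDE centred on all points seen so far
            # (observed plus previously imputed).
            centre = x[rng.integers(len(x))]
            x.append(centre + bandwidth * rng.normal())
        draws.append(np.mean(x))  # functional of the completed sequence
    return np.array(draws)

x_obs = rng.normal(loc=1.0, scale=2.0, size=50)
theta = predictive_resample(x_obs)
print(theta.mean(), np.quantile(theta, [0.025, 0.975]))
```

The point of the paper, as summarized above, is that rules like this kernel update, which fall outside earlier martingale-based conditions, can still be shown to yield valid inference under the generalized condition.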
📝 Abstract
There is growing interest in procedures for Bayesian inference that bypass the need to specify a model and prior, relying instead on a predictive rule that describes how we learn about future observations given the available ones. At the heart of the idea is a bootstrap-type scheme that allows us to move from the realm of prediction to that of inference. A key question is which conditions the predictive rule must satisfy to produce valid inference. In this work, we substantially relax previous assumptions by building on a generalization of martingales, opening up the possibility of employing a much wider range of predictive rules that were previously ruled out. These include "old" ideas in Statistics and Learning Theory, such as kernel estimators, and more novel ones, such as the parametric Bayesian bootstrap or copula-based algorithms. Our aim is not to advocate for one predictive rule over the others, but rather to showcase the benefits of working with this larger class of predictive rules.
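To make the bootstrap-type scheme concrete for a parametric rule as well, a companion sketch (again hypothetical: the Gaussian plug-in predictive and all names and defaults are assumptions, offered in the spirit of the parametric Bayesian bootstrap mentioned in the abstract, not as the paper's method) swaps the kernel update for a parametric predictive that is refitted after every forward draw.

```python
import numpy as np

rng = np.random.default_rng(1)

def parametric_resample(x_obs, n_forward=500, n_draws=100):
    """Parametric-Bayesian-bootstrap-style predictive resampling.

    A hypothetical sketch: the predictive rule is a plug-in Gaussian
    N(mean, sd) refitted to all points seen so far after each draw.
    """
    thetas = []
    for _ in range(n_draws):
        x = list(x_obs)
        for _ in range(n_forward):
            # Plug-in parametric predictive rule: sample the next point
            # from a Gaussian fitted to observed plus imputed data.
            x.append(rng.normal(np.mean(x), np.std(x, ddof=1)))
        thetas.append((np.mean(x), np.std(x, ddof=1)))
    return np.array(thetas)  # draws of (mu, sigma) playing a posterior's role

x_obs = rng.normal(1.0, 2.0, size=50)
draws = parametric_resample(x_obs)
print(draws.mean(axis=0))
```

The design choice in both sketches is the same: only the predictive rule changes, while the surrounding resampling loop that converts prediction into inference stays fixed, which is precisely the modularity the enlarged class of predictive rules is meant to exploit.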