🤖 AI Summary
This work addresses the limitations of conventional model priors in high-dimensional Bayesian variable selection, which often exhibit an undue preference for overly complex models, as well as shortcomings of Jeffreys' suggestion of allocating prior mass uniformly across model sizes. The authors systematically analyze these deficiencies and propose a class of objective priors that achieves a more favorable balance between theoretical rigor and empirical performance. Through theoretical analysis and numerical experiments in high-dimensional Bayesian model selection, the proposed approach demonstrates superior accuracy in variable selection and more effective control of model complexity than existing methods, along with strong robustness and good finite-sample behavior, making it a compelling alternative for high-dimensional settings.
📝 Abstract
For many years it was routine to use equal model prior probabilities in Bayesian model uncertainty analysis. At least twenty years ago it became clear that this was problematic, leading to support of much too large models in the increasingly huge model spaces being considered in genomics and other fields. A popular replacement was to adopt a suggestion of Harold Jeffreys for the variable selection problem in which a total of $k$ possible variables are being considered for inclusion in the model: give the collection of all models containing $d$ variables ($d = 0, \ldots, k$) prior probability $1/(k + 1)$ and then divide this prior probability equally among the models in the collection. Many other choices of model prior probabilities that impose severe parsimony have also been introduced. We begin by reviewing the problems with using equal model prior probabilities and then discuss some serious problems with the Jeffreys choice. Finally, we introduce and study a number of objective alternative choices of model prior probabilities, from both numerical and theoretical perspectives.
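The two priors contrasted in the abstract can be made concrete with a short sketch. Under the equal ("uniform") prior each of the $2^k$ models gets probability $2^{-k}$, so the induced prior on model *size* $d$ is proportional to $\binom{k}{d}$ and peaks at $d \approx k/2$, which is the source of the bias toward large models. Under Jeffreys' suggestion each size $d$ receives total mass $1/(k+1)$, split equally among the $\binom{k}{d}$ models of that size. The function names below are illustrative, not from the paper:

```python
from math import comb

def uniform_model_prior(k: int, d: int) -> float:
    """Equal prior over all 2^k models: each model has probability 2^-k,
    so the induced prior on model size d is C(k, d) / 2^k."""
    return comb(k, d) / 2**k

def jeffreys_model_prior(k: int, d: int) -> float:
    """Jeffreys' choice: size d gets total mass 1/(k+1), divided equally
    among the C(k, d) models containing exactly d variables."""
    return 1.0 / ((k + 1) * comb(k, d))

k = 20
# Induced prior on model size under each choice.
size_prior_uniform = [uniform_model_prior(k, d) for d in range(k + 1)]
size_prior_jeffreys = [comb(k, d) * jeffreys_model_prior(k, d)
                       for d in range(k + 1)]
# The uniform prior concentrates on mid-sized models (peak at d = k/2),
# while Jeffreys' choice spreads mass evenly: each size gets 1/(k+1).
```

Running this for $k = 20$ shows `size_prior_uniform` peaking at $d = 10$ while every entry of `size_prior_jeffreys` equals $1/21$, illustrating why the uniform prior favors large models as $k$ grows.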