🤖 AI Summary
This paper challenges the conventional wisdom that Bayesian updating remains optimal even under model misspecification, asking whether non-Bayesian learning can provably outperform it in such settings. We develop a unified framework integrating asymptotic statistical analysis, learning dynamics modeling, and counterfactual performance comparison. Within this framework, we provide the first systematic theoretical demonstration that certain falsifiable non-Bayesian update rules achieve superior performance across multiple classes of misspecified environments, exhibiting faster convergence rates, lower cumulative regret, and enhanced robustness. Our core contribution is the formal proposal of a class of non-Bayesian learning criteria satisfying the principle of falsifiability, coupled with rigorous proofs of their performance advantages under model misspecification. This work establishes a novel paradigm and benchmark for misspecified learning theory, advancing foundational understanding beyond classical Bayesian optimality assumptions.
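As a rough illustration of the kind of comparison the paper formalizes, the sketch below contrasts exact Bayesian updating with a tempered ("generalized Bayes") update on a deliberately misspecified two-model class, measuring cumulative log loss. The tempered rule, the learning-rate value, and the coin-flip setup are illustrative assumptions only, not the paper's proposed criteria or proofs.

```python
# Illustrative sketch (not the paper's construction): standard Bayesian updating vs.
# a tempered ("generalized Bayes") update on a misspecified model class. The class
# contains only coins with bias 0.2 or 0.8, while the data come from a fair coin,
# so neither candidate model is correct.
import numpy as np

rng = np.random.default_rng(0)
T, TRIALS = 1000, 100
BIASES = np.array([0.2, 0.8])   # misspecified model class
ETA = 0.25                      # tempering / learning rate < 1 (assumed value)

def avg_cumulative_log_loss(eta):
    """Average cumulative log loss of the predictive mixture when likelihoods are tempered by eta."""
    total = 0.0
    for _ in range(TRIALS):
        w = np.array([0.5, 0.5])          # prior over the two candidate models
        x = rng.random(T) < 0.5           # data from a fair coin (outside the model class)
        loss = 0.0
        for xt in x:
            p = w @ BIASES                # predictive probability of heads
            loss += -np.log(p if xt else 1.0 - p)
            lik = BIASES if xt else 1.0 - BIASES
            w = w * lik**eta              # eta = 1 recovers exact Bayesian updating
            w /= w.sum()
        total += loss
    return total / TRIALS

bayes, tempered = avg_cumulative_log_loss(1.0), avg_cumulative_log_loss(ETA)
print(f"avg cumulative log loss, Bayes (eta=1):       {bayes:.1f}")
print(f"avg cumulative log loss, tempered (eta={ETA}): {tempered:.1f}")
print(f"ideal (always predict 0.5):                   {T * np.log(2):.1f}")
```

In this toy setting the Bayesian posterior performs a zero-drift random walk between the two wrong models and its predictive mixture pays for the resulting overconfidence, while the tempered rule keeps its weights closer to uniform and stays nearer the ideal log loss; whether this particular rule belongs to the paper's class of falsifiable criteria is not established here.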
📝 Abstract
Deviations from Bayesian updating are traditionally categorized as biases, errors, or fallacies, thus implying their inherent "sub-optimality." We offer a more nuanced view. We demonstrate that, in learning problems with misspecified models, non-Bayesian updating can outperform Bayesian updating.
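For concreteness, one well-studied contrast between the two update families can be written as follows; this is an illustrative example, and the specific non-Bayesian rules analyzed in the paper may differ.

```latex
% Standard Bayesian belief update over a (possibly misspecified) model class \Theta:
\pi_{t+1}(\theta) \;\propto\; \pi_t(\theta)\, p_\theta(x_{t+1})
% One well-known non-Bayesian alternative: the tempered ("generalized Bayes") update
% with learning rate \eta \neq 1, which down-weights the likelihood when \eta < 1:
\pi_{t+1}(\theta) \;\propto\; \pi_t(\theta)\, p_\theta(x_{t+1})^{\eta}
```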