🤖 AI Summary
Existing theoretical characterizations of proper scoring rules for probabilistic forecasting and distribution estimation are fragmented and lack methodological clarity.
Method: Drawing on convex analysis, information geometry, and decision theory, this paper systematically unifies general characterization theorems with canonical rule families—including logarithmic score and Brier score—establishing rigorous criteria for score propriety and a principled optimization framework.
Contribution/Results: We prove, for the first time, the equivalence between proper scoring rules as unbiased estimation tools and as consistent evaluation criteria for probabilistic forecasts. This foundational result provides a unified theoretical basis for Bayesian updating, density estimation, and model calibration. It significantly extends the methodological scope and applicability of proper scoring rules in statistical inference and machine learning, clarifying their role in both theoretical foundations and practical algorithm design.
📝 Abstract
Proper scoring rules have been a subject of growing interest in recent years, not only as tools for evaluation of probabilistic forecasts but also as methods for estimating probability distributions. In this article, we review the mathematical foundations of proper scoring rules including general characterization results and important families of scoring rules. We discuss their role in statistics and machine learning for estimation and forecast evaluation. Furthermore, we comment on interesting developments of their usage in applications.