🤖 AI Summary
Robbins' estimator, the canonical empirical Bayes method for Poisson models, suffers from inherent limitations, including a lack of smoothness and monotonicity and high sensitivity to sparsely observed counts. To address these issues, this paper systematically introduces the minimum distance estimation framework into Poisson empirical Bayes estimation, proposing a novel family of estimators. Under nonparametric prior classes, the proposed estimators retain minimax regret optimality while significantly improving stability, smoothness, and monotonicity, and enhancing robustness to low-frequency events. Theoretical analysis establishes their statistical optimality, proving minimax regret bounds and consistency properties. Empirical evaluation across multiple synthetic and real-world datasets demonstrates consistent superiority over Robbins' estimator in accuracy and reliability, with improved interpretability. The method bridges theoretical rigor and practical utility, offering a principled alternative for empirical Bayes inference in sparse-count settings.
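To see the instability the summary refers to, here is a minimal sketch of Robbins' formula, E[θ | X = x] ≈ (x + 1) · N(x + 1) / N(x), where N(k) counts observations equal to k. The function name and toy data are illustrative, not from the paper:

```python
from collections import Counter

def robbins_estimate(samples, x):
    """Robbins' empirical Bayes estimate of E[theta | X = x] under a
    Poisson model: (x + 1) * N(x + 1) / N(x), where N(k) is the number
    of observations equal to k."""
    counts = Counter(samples)
    if counts[x] == 0:
        raise ValueError(f"no observations equal to {x}")
    return (x + 1) * counts[x + 1] / counts[x]

# A tiny dataset: the count 2 was seen once, 3 was never seen.
samples = [0, 0, 0, 1, 1, 2]
print(robbins_estimate(samples, 0))  # 1 * 2/3 = 0.666...
print(robbins_estimate(samples, 1))  # 2 * 1/2 = 1.0
print(robbins_estimate(samples, 2))  # 3 * 0/1 = 0.0 -- drops to zero
```

The last call shows the failure mode: because the value 3 was never observed, the estimate at x = 2 collapses to zero, breaking the monotonicity any true Bayes estimator would have.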
📝 Abstract
The Robbins estimator is the most iconic and widely used procedure in the empirical Bayes literature for the Poisson model. On one hand, this method has recently been shown to be minimax optimal in terms of the regret (excess risk over the Bayesian oracle that knows the true prior) for various nonparametric classes of priors. On the other hand, it has long been recognized in practice that the Robbins estimator lacks the smoothness and monotonicity of Bayes estimators and can be easily derailed by data points that were rarely observed before. Based on the minimum distance method, we propose a suite of empirical Bayes estimators, including the classical nonparametric maximum likelihood estimator, that outperform the Robbins method on a variety of synthetic and real data sets and retain its optimality in terms of minimax regret.
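The nonparametric maximum likelihood estimator mentioned in the abstract can be sketched with a simple grid-based EM fit of the Poisson mixing distribution; the resulting plug-in posterior mean is smooth and monotone in x by construction. This is only an illustrative sketch under the assumption of a fixed support grid, not the paper's minimum distance procedure, and all names here are hypothetical:

```python
import numpy as np
from math import factorial

def npmle_em(samples, grid, n_iter=500):
    """Fit mixing weights w over a fixed grid of Poisson means by EM,
    maximizing the marginal likelihood (a grid-based NPMLE sketch)."""
    x = np.asarray(samples)
    grid = np.asarray(grid, dtype=float)
    fact = np.array([factorial(int(v)) for v in x], dtype=float)
    # Likelihood matrix L[i, j] = P(X = x_i | theta = grid_j)
    L = np.exp(-grid[None, :]) * grid[None, :] ** x[:, None] / fact[:, None]
    w = np.full(len(grid), 1.0 / len(grid))
    for _ in range(n_iter):
        post = L * w                              # E-step: unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                     # M-step: update mixing weights
    return w

def bayes_estimate(x, grid, w):
    """Posterior mean E[theta | X = x] under the fitted prior."""
    grid = np.asarray(grid, dtype=float)
    lik = np.exp(-grid) * grid ** x / factorial(x)
    return float((w * grid * lik).sum() / (w * lik).sum())
```

Unlike the Robbins formula, this plug-in rule never returns a zero estimate at a count whose successor was unobserved, since the fitted prior smooths across all values of x.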