🤖 AI Summary
This paper bridges the gap between classical regularization methods (e.g., ridge regression, the nonnegative garrote) and Bayesian hierarchical modeling by developing a unified global-local regularization framework for adaptive shrinkage in high-dimensional statistics. Methodologically, it integrates isotonic empirical Bayes estimation with minimax risk theory over sparse ordered model classes: local regularization strengths are estimated by maximizing the marginal likelihood under order constraints, generalizing Stein's positive-part estimator and exposing connections among empirical Bayes, shape-constrained estimation, and degrees-of-freedom adjustment. Theoretically, the estimator attains near-minimax risk (up to logarithmic factors) over sparse ordered models. Empirically, the framework proves flexible and robust in orthogonal polynomial regression.
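As background (a standard result, not restated from the paper): for $y \sim N(\theta, \sigma^2 I_n)$, Stein's positive-part estimator applies a single global shrinkage factor,

$$
\hat{\theta}^{+} = \left(1 - \frac{(n-2)\,\sigma^2}{\lVert y \rVert^2}\right)_{+} y,
$$

whereas the framework summarized above replaces this one global factor with local, coordinatewise factors estimated by marginal likelihood under order constraints.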
📝 Abstract
We propose a unified framework for global-local regularization that bridges the gap between classical techniques, such as ridge regression and the nonnegative garrote, and modern Bayesian hierarchical modeling. By estimating local regularization strengths via marginal likelihood under order constraints, our approach generalizes Stein's positive-part estimator and provides a principled mechanism for adaptive shrinkage in high-dimensional settings. We establish that this isotonic empirical Bayes estimator achieves near-minimax risk (up to logarithmic factors) over sparse ordered model classes. Applications to orthogonal polynomial regression demonstrate the methodology's flexibility, while our theoretical results clarify the connections between empirical Bayes, shape-constrained estimation, and degrees-of-freedom adjustments.
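To make the shrinkage mechanism concrete, the following is a minimal Python sketch under an assumed Gaussian sequence model (the paper's exact setting, notation, and estimator may differ). With $y_i \sim N(\theta_i, \sigma^2)$ and prior $\theta_i \sim N(0, \tau_i^2)$, the marginal variance $v_i = \sigma^2 + \tau_i^2$ is estimated by maximizing the marginal likelihood subject to $v_1 \ge v_2 \ge \cdots$, which for this Gaussian family reduces to a decreasing isotonic regression (PAVA) of the $y_i^2$, clipped below at $\sigma^2$; the resulting coordinatewise rule $(1 - \sigma^2/\hat v_i)_+\, y_i$ is a positive-part-style shrinkage.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_eb_shrinkage(y, sigma2=1.0):
    """Isotonic empirical Bayes shrinkage (illustrative sketch).

    Assumed model: y_i ~ N(theta_i, sigma2) with prior theta_i ~ N(0, tau_i^2),
    so marginally y_i ~ N(0, v_i) with v_i = sigma2 + tau_i^2, under the
    order constraint v_1 >= v_2 >= ... (later coordinates shrunk more).
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    # Unconstrained marginal MLE of each v_i is y_i^2; the order-constrained
    # MLE pools these values via decreasing isotonic regression (PAVA).
    iso = IsotonicRegression(increasing=False)
    v_hat = iso.fit_transform(np.arange(n), y**2)
    # Enforce v_i >= sigma2 (i.e., tau_i^2 >= 0); clipping an isotonic fit
    # to an interval preserves optimality under the bound constraint.
    v_hat = np.maximum(v_hat, sigma2)
    # Local shrinkage factor lambda_i = sigma2 / v_i in (0, 1]; the rule
    # (1 - lambda_i) * y_i acts as a coordinatewise, order-constrained
    # analogue of Stein's positive-part estimator.
    return (1.0 - sigma2 / v_hat) * y
```

In the orthogonal polynomial regression application, `y` would be the empirical coefficients in the orthogonal basis ordered by degree, so the monotone constraint encodes the expectation that higher-degree coefficients warrant stronger shrinkage.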