A New Look at Bayesian Testing

📅 2026-02-11
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the theoretical gap between classical hypothesis testing with fixed significance levels and Bayesian methods, particularly in light of the Lindley paradox. By leveraging moderate deviation theory, the authors develop a unified Bayesian framework for hypothesis testing. Through Bayesian risk analysis and asymptotic expansions, they show that the optimal test threshold operates on the scale of √(log n / n), naturally yielding Jeffreys’ threshold, the BIC penalty term, and the Chernoff–Stein error exponent. This framework not only resolves the Lindley paradox but also extends Rubin’s (1965) program to modern settings such as high-dimensional sparse inference, goodness-of-fit testing, and model selection. Moreover, it establishes the superiority of Bayesian procedures over classical Neyman–Pearson tests in terms of statistical risk.

📝 Abstract
We develop a unified framework for Bayesian hypothesis testing through the theory of moderate deviations, providing explicit asymptotic expansions for Bayes risk and optimal test statistics. Our analysis reveals that Bayesian test cutoffs operate on the moderate deviation scale $\sqrt{\log n/n}$, in sharp contrast to the sample-size-invariant calibrations of classical testing. This fundamental difference explains the Lindley paradox and establishes the risk-theoretic superiority of Bayesian procedures over fixed-$\alpha$ Neyman-Pearson tests. We extend the seminal Rubin (1965) program to contemporary settings including high-dimensional sparse inference, goodness-of-fit testing, and model selection. The framework unifies several classical results: Jeffreys' $\sqrt{\log n}$ threshold, the BIC penalty $(d/2)\log n$, and the Chernoff-Stein error exponents all emerge naturally from moderate deviation analysis of Bayes risk. Our results provide theoretical foundations for adaptive significance levels and connect Bayesian testing to information theory through gambling-based interpretations.
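The scale contrast at the heart of the abstract can be made concrete with a minimal sketch (not the paper's actual procedure): for a sample mean under a standard normal null, a fixed-$\alpha$ cutoff shrinks like $1/\sqrt{n}$, while a moderate-deviation cutoff shrinks like $\sqrt{\log n/n}$, so for large $n$ an effect can be "significant" at level $\alpha$ yet fall below the Bayesian threshold, which is the Lindley-paradox region. The choice of unit variance and $\alpha = 0.05$ here is purely illustrative.

```python
import math

Z_975 = 1.959963984540054  # 97.5% standard normal quantile (alpha = 0.05)

def np_cutoff(n):
    """Fixed-alpha Neyman-Pearson cutoff for |x_bar|: shrinks like 1/sqrt(n)."""
    return Z_975 / math.sqrt(n)

def bayes_cutoff(n):
    """Moderate-deviation-scale cutoff for |x_bar|: shrinks like sqrt(log n / n)."""
    return math.sqrt(math.log(n) / n)

for n in (10, 100, 10_000, 1_000_000):
    print(f"n={n:>9}  fixed-alpha={np_cutoff(n):.5f}  "
          f"moderate-dev={bayes_cutoff(n):.5f}")
# The ratio of the two cutoffs grows like sqrt(log n)/z, so beyond a
# moderate sample size the Bayesian cutoff is the larger one: effects
# in between are rejected by the fixed-alpha test but not the Bayesian one.
```

For small $n$ the fixed-$\alpha$ cutoff is the stricter of the two; the ordering flips once $\log n$ exceeds $z_{\alpha/2}^2$ (around $n \approx 47$ in this toy setup), after which the gap widens with $\sqrt{\log n}$.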
Problem

Research questions and friction points this paper is trying to address.

Bayesian hypothesis testing
Lindley paradox
moderate deviations
Bayes risk
Neyman-Pearson tests
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian hypothesis testing
moderate deviations
Bayes risk
adaptive significance levels
information theory