🤖 AI Summary
This paper investigates the monotonicity of expected utility for e-variables under conditioning on a sufficient statistic, in analogy with the MSE-reduction property of the Rao–Blackwell theorem. Using Jensen's inequality together with the theory of sufficiency, the authors prove that for any concave utility function, conditioning an e-variable on a sufficient statistic never decreases its expected utility. This establishes a utility-improvement relationship between e-variables and sufficient statistics, and the result extends to compound e-variables, asymptotic e-variables, and e-processes. As an application, for linear regression with known variance the authors derive the log-optimal e-variable in closed form, simplifying its computation. The results provide both theoretical foundations and practical tools for constructing efficient e-variables in sequential and adaptive inference settings.
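A hypothetical toy illustration of the claimed inequality (not an example from the paper): for i.i.d. $N(\mu, 1)$ data with null $\mu = 0$, the statistic $e = \exp(X_1 - 1/2)$ is a valid but wasteful e-variable, since it ignores all but the first observation. Conditioning it on the sufficient statistic $S = \sum_i X_i$ gives $\exp(S/n - 1/(2n))$ (using $X_1 \mid S \sim N(S/n,\, 1 - 1/n)$, which holds for every $\mu$ by sufficiency), and a Monte Carlo check under an alternative $\mu = 1$ shows the conditioned version has strictly higher expected log-utility. All parameter values here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu_alt, trials = 10, 1.0, 200_000

# Data drawn under the alternative N(mu_alt, 1); the null is N(0, 1).
X = rng.normal(mu_alt, 1.0, size=(trials, n))
S = X.sum(axis=1)  # sufficient statistic for the mean

# A valid but wasteful e-variable using only the first observation:
# e = exp(X1 - 1/2) satisfies E_0[e] = 1 under the null.
log_e = X[:, 0] - 0.5

# Conditioning on S: for any mean, X1 | S ~ N(S/n, 1 - 1/n), hence
# E[exp(X1 - 1/2) | S] = exp(S/n - 1/(2n)) -- still an e-variable,
# and free of the unknown mean by sufficiency.
log_e_cond = S / n - 1.0 / (2 * n)

print(log_e.mean())       # ~ mu_alt - 1/2      = 0.5
print(log_e_cond.mean())  # ~ mu_alt - 1/(2n)   = 0.95
```

With the log utility $U = \log$, the gap between the two Monte Carlo averages is exactly the utility improvement the paper's theorem guarantees is nonnegative.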
📝 Abstract
We show that for any concave utility, the expected utility of an e-variable can only increase after conditioning on a sufficient statistic. The simplest form of the result has an extremely straightforward proof, which follows from a single application of Jensen's inequality. Similar statements hold for compound e-variables, asymptotic e-variables, and e-processes. These results echo the Rao–Blackwell theorem, which states that the expected squared error of an estimator can only decrease after conditioning on a sufficient statistic. We provide several applications of this insight, including a simplified derivation of the log-optimal e-variable for linear regression with known variance.
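The single application of Jensen's inequality mentioned above can be sketched as follows (notation assumed here: $e$ an e-variable, $T$ a sufficient statistic, $U$ a concave utility):

$$
\mathbb{E}\!\left[U\!\big(\mathbb{E}[e \mid T]\big)\right]
\;\ge\;
\mathbb{E}\!\left[\mathbb{E}\!\left[U(e) \mid T\right]\right]
\;=\;
\mathbb{E}\!\left[U(e)\right],
$$

where the inequality is conditional Jensen applied inside the outer expectation, and the equality is the tower property. Sufficiency of $T$ ensures $\mathbb{E}[e \mid T]$ does not depend on the unknown parameter, and $\mathbb{E}\big[\mathbb{E}[e \mid T]\big] = \mathbb{E}[e] \le 1$ under the null, so the conditioned quantity remains a valid e-variable.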