🤖 AI Summary
This study investigates the non-asymptotic trade-off between Type-I and Type-II error probabilities in binary hypothesis testing, with a focus on the strong converse problem under an exponentially decaying constraint on the Type-I error. By introducing the reverse Rényi divergence, the work establishes novel non-asymptotic upper bounds on the Type-II error probability that strictly improve upon existing finite-sample results. The analysis reveals a sharp phase transition governed by the Kullback–Leibler (KL) divergence $D(P_1\|P_0)$: when the exponential decay rate of the Type-I error exceeds $D(P_1\|P_0)$, the Type-II error probability converges to one exponentially fast, whereas for smaller rates it vanishes exponentially fast. Numerical experiments corroborate the tightness of the proposed bounds and their improvement over prior results.
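In symbols (a sketch of the claimed dichotomy; the notation $\alpha_n$ and $\beta_n$ for the optimal Type-I and Type-II error probabilities on $n$ samples is assumed here rather than taken from the paper): if the Type-I error is constrained as $\alpha_n \le e^{-nc}$, then
$$
\beta_n \longrightarrow
\begin{cases}
1 & \text{exponentially fast, if } c > D(P_1\|P_0),\\
0 & \text{exponentially fast, if } c < D(P_1\|P_0),
\end{cases}
$$
with no claim made here about the boundary case $c = D(P_1\|P_0)$.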
📝 Abstract
This work investigates binary hypothesis testing between $H_0\sim P_0$ and $H_1\sim P_1$ in the finite-sample regime under asymmetric error constraints. By employing the ``reverse'' R\'enyi divergence, we derive novel non-asymptotic bounds on the Type II error probability which naturally establish a strong converse result. Furthermore, when the Type I error is constrained to decay exponentially at a rate $c$, we show that the Type II error converges to 1 exponentially fast if $c$ exceeds the Kullback-Leibler divergence $D(P_1\|P_0)$, and vanishes exponentially fast if $c$ is smaller. Finally, we present numerical examples demonstrating that the proposed converse bounds strictly improve upon existing finite-sample results in the literature.
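As a toy illustration of this dichotomy (not the paper's proposed bound), the sketch below evaluates the optimal non-randomized Neyman–Pearson Type-II error for i.i.d. Bernoulli samples under a Type-I constraint of $e^{-nc}$; the distributions $P_0=\mathrm{Bern}(0.3)$ and $P_1=\mathrm{Bern}(0.6)$ and the sample sizes are assumptions made purely for illustration.

```python
# Toy illustration (not the paper's bound): behavior of the optimal Type-II
# error when the Type-I error must decay as exp(-n*c), for assumed Bernoulli
# hypotheses P0 = Bern(0.3) and P1 = Bern(0.6).
import numpy as np
from scipy.stats import binom

p0, p1 = 0.3, 0.6
# D(P1 || P0): the threshold rate stated in the abstract.
D10 = p1 * np.log(p1 / p0) + (1 - p1) * np.log((1 - p1) / (1 - p0))

def type2_error(n, c):
    """Smallest Type-II error of a non-randomized threshold test on the success
    count k ~ Binom(n, p), subject to the Type-I error not exceeding exp(-n*c)."""
    alpha_max = np.exp(-n * c)
    # With p1 > p0 the likelihood ratio is increasing in k, so the optimal
    # deterministic test rejects H0 when k >= t, for the smallest feasible t.
    for t in range(n + 2):
        if binom.sf(t - 1, n, p0) <= alpha_max:   # Type-I error  P0(k >= t)
            return binom.cdf(t - 1, n, p1)        # Type-II error P1(k <  t)
    return 1.0

for c in (0.5 * D10, 2.0 * D10):                  # decay rate below / above D(P1||P0)
    betas = [round(type2_error(n, c), 4) for n in (20, 100, 500)]
    print(f"c = {c:.3f}, D(P1||P0) = {D10:.3f}, Type-II errors: {betas}")
```

Running this with a decay rate below $D(P_1\|P_0)$ should show the Type-II error shrinking as $n$ grows, while a rate above it drives the error toward one, consistent with the phase transition described in the abstract.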