🤖 AI Summary
This paper studies sequential adversarial binary hypothesis testing: each hypothesis corresponds to a closed, convex set of distributions, and an adversary, aware of the observation history, dynamically selects the generating distribution from the respective set; the detector employs variable-length sampling under a constraint on the expected sample size. Leveraging tools from information theory, large deviations, and convex optimization, the paper characterizes, for the first time, the closure of the set of achievable pairs of type-I and type-II error exponents in this adversarial setting, i.e., the optimal error exponent trade-off region under a given expected-sample-size constraint. This reveals the fundamental limits that adversarial distribution selection imposes on detection performance and establishes a benchmark and design principle for robust sequential detection.
📝 Abstract
We study the adversarial binary hypothesis testing problem [1] in the sequential setting. Associated with each hypothesis is a closed, convex set of distributions. Given the hypothesis, each observation is generated according to a distribution chosen (from the set associated with the hypothesis) by an adversary who has access to past observations. In the sequential setting, the number of observations the detector uses to arrive at a decision is variable; however, there is a constraint on the expected number of observations used. We characterize the closure of the set of achievable pairs of error exponents.
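As a point of reference for the non-adversarial special case (each hypothesis a singleton set), variable-length sequential testing is classically done with Wald's sequential probability ratio test (SPRT), where the sample size is random and traded off against the two error probabilities. The sketch below is purely illustrative and is not the paper's scheme; the Bernoulli distributions, thresholds, and target error levels are assumptions chosen for the example.

```python
import math
import random

def sprt(sample, p0, p1, alpha=1e-3, beta=1e-3, max_n=10_000):
    """Wald's SPRT for Bernoulli observations: H0: p = p0 vs H1: p = p1.

    `sample()` draws one observation in {0, 1}. Returns (decision, n), where
    decision is 0 or 1 and n is the (variable) number of observations used.
    Thresholds use Wald's approximations A = (1-beta)/alpha, B = beta/(1-alpha).
    """
    upper = math.log((1 - beta) / alpha)   # crossing above -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing below -> accept H0
    llr = 0.0
    for n in range(1, max_n + 1):
        x = sample()
        # Accumulate the per-observation log-likelihood ratio log(P1(x)/P0(x)).
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return (1 if llr > 0 else 0), max_n  # forced decision at the sample cap

if __name__ == "__main__":
    rng = random.Random(0)
    p0, p1 = 0.3, 0.6  # illustrative hypotheses
    # Data generated under H1: the test should almost always accept H1,
    # and the stopping time n varies from run to run.
    runs = [sprt(lambda: int(rng.random() < p1), p0, p1) for _ in range(200)]
    accept_h1 = sum(d for d, _ in runs) / len(runs)
    avg_n = sum(n for _, n in runs) / len(runs)
    print(accept_h1, avg_n)
```

In the adversarial setting studied here, the per-observation distribution is not fixed but is chosen by the adversary from a convex set as a function of past observations, which is what makes the achievable exponent region nontrivial to characterize.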