🤖 AI Summary
This paper investigates binary hypothesis testing over adversarial channels: each hypothesis corresponds to a family of channels, and, given the true hypothesis, an adversary selects a channel from that family for each position of the input vector. The transmitter designs its input sequence without knowledge of the hypothesis, and the detector performs hypothesis testing on the channel outputs. The work characterizes the optimal Chernoff-Stein exponent under three transmitter strategies: deterministic encoding, private randomness (randomness unavailable to the detector), and shared randomness (common randomness known to both transmitter and detector but not to the adversary). It establishes that memoryless coding achieves the optimal exponent under shared randomness, whereas a memoryless strategy can be strictly suboptimal when the transmitter has only private randomness, so memory in the encoder can strictly improve the exponent in that regime. The analysis combines large-deviations techniques with random coding arguments to characterize the fundamental limits in all three randomness regimes.
📝 Abstract
We study the Chernoff-Stein exponent of the following binary hypothesis testing problem: Associated with each hypothesis is a set of channels. A transmitter, without knowledge of the hypothesis, chooses the vector of inputs to the channel. Given the hypothesis, an adversary chooses channels from the set associated with that hypothesis, one for each element of the input vector. Based on the channel outputs, a detector attempts to distinguish between the hypotheses. We study the Chernoff-Stein exponent for the cases where the transmitter (i) is deterministic, (ii) may privately randomize, and (iii) shares randomness with the detector that is unavailable to the adversary. It turns out that while a memoryless transmission strategy is optimal under shared randomness, it may be strictly suboptimal when the transmitter only has private randomness.
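As background, in the classical non-adversarial setting with i.i.d. observations, the Chernoff-Stein lemma states that with the type-I error held fixed, the best achievable type-II error probability decays as exp(-n D(P0 || P1)), where D is the Kullback-Leibler divergence between the output distributions under the two hypotheses. A minimal sketch of computing this exponent (the distributions below are illustrative, not taken from the paper):

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) in nats for finite distributions
    given as lists of probabilities over the same alphabet."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical channel-output distributions under hypotheses H0 and H1.
p0 = [0.7, 0.3]
p1 = [0.4, 0.6]

# Chernoff-Stein exponent: type-II error of the optimal test with
# fixed type-I error decays as exp(-n * D(p0 || p1)).
exponent = kl_divergence(p0, p1)
print(f"Stein exponent D(P0||P1) = {exponent:.4f} nats")
```

In the adversarial problem studied here, the adversary's per-position channel choice effectively replaces this single divergence with a worst-case quantity over the channel families, which is what makes the role of transmitter randomness and memory nontrivial.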