🤖 AI Summary
This paper investigates the rate–reliability function for deterministic identification over arbitrary memoryless channels, under exponential constraints $e^{-nE_1}$ and $e^{-nE_2}$ on the error probabilities of first and second kind. Methodologically, it combines information-theoretic analysis, exponential error bound theory, and geometric measure tools, including packing/covering numbers and Minkowski dimension, to characterize the asymptotic behavior of identification rates. The main contribution is a pair of upper and lower bounds on the rate–reliability function under exponential error constraints; crucially, the Minkowski dimension of (a parametrisation of) the channel output set is introduced to unify the analysis of superlinear identification rates (e.g., $\Theta(n\log n)$). Furthermore, the paper derives a refined asymptotic expansion for small positive reliability exponents: the leading term of the identification rate is the Minkowski dimension multiplied by $\log\min\{E_1,E_2\}$. The framework extends to classical-quantum channels and quantum channels with tensor product input restriction, establishing a unified geometric and information-theoretic foundation for identification theory.
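To make the stated expansion concrete, here is a schematic display-form rendering; the symbols $R(E_1,E_2)$ for the rate–reliability function and $d$ for the Minkowski dimension of the parametrised channel output set are notation introduced here for illustration, with the sign convention chosen so that the rate is positive (the paper's exact statement and error term may differ):

$$
R(E_1,E_2) \;=\; d \,\log\frac{1}{\min\{E_1,E_2\}}\,\bigl(1+o(1)\bigr)
\qquad \text{as } \min\{E_1,E_2\} \to 0^+ .
$$

Heuristically, letting the exponents shrink with the block length (so that the errors vanish only slowly) makes $\log\frac{1}{\min\{E_1,E_2\}}$ grow like $\log n$, which is consistent with the $\Theta(n\log n)$ message-length scaling recovered in that regime.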
📝 Abstract
We investigate deterministic identification over arbitrary memoryless channels under the constraint that the error probabilities of first and second kind are exponentially small in the block length $n$, controlled by reliability exponents $E_1,E_2 \geq 0$. In contrast to the regime of slowly vanishing errors, where the identifiable message length scales as $\Theta(n\log n)$, here we find that for positive exponents linear scaling is restored, now with a rate that is a function of the reliability exponents. We give upper and lower bounds on the ensuing rate–reliability function in terms of (the logarithm of) the packing and covering numbers of the channel output set, which for small error exponents $E_1,E_2>0$ can be expanded in leading order as the product of the Minkowski dimension of a certain parametrisation of the channel output set and $\log\min\{E_1,E_2\}$. These bounds allow us to recover the previously observed slightly superlinear identification rates, and offer a different perspective for understanding them in more traditional information-theoretic terms. We further illustrate our results with a discussion of the case of dimension zero, and extend them to classical-quantum channels and quantum channels with tensor product input restriction.
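For readers less familiar with the geometric notion invoked above, the Minkowski (box-counting) dimension of a bounded set $S$ is standardly defined through its covering numbers; the notation $N_\delta(S)$ below is generic and not necessarily the paper's:

$$
\dim_M(S) \;=\; \lim_{\delta \to 0} \frac{\log N_\delta(S)}{\log(1/\delta)},
$$

where $N_\delta(S)$ is the minimal number of balls of radius $\delta$ needed to cover $S$ (taking $\liminf$ or $\limsup$ gives the lower and upper Minkowski dimensions when the limit does not exist). This is the mechanism by which logarithms of packing and covering numbers translate into a dimension factor in the rate bounds.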