Rate-reliability functions for deterministic identification

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the rate–reliability function for deterministic identification over arbitrary memoryless channels, under exponential constraints on both error probabilities: $e^{-nE_1}$ and $e^{-nE_2}$. Methodologically, it integrates information-theoretic analysis, exponential error bound theory, and geometric measure tools—including packing/covering numbers and the Minkowski dimension—to characterize the asymptotic behavior of identification rates. The main contribution is a characterization of the rate–reliability function through upper and lower bounds under exponential error constraints; crucially, it introduces the Minkowski dimension of the channel output space to unify the analysis of superlinear identification rates (e.g., $\Theta(n\log n)$). Furthermore, it derives a refined asymptotic expansion for small positive reliability exponents: the leading term of the identification rate is the product of the Minkowski dimension and $\log\min\{E_1,E_2\}$. The framework extends to classical–quantum channels and quantum channels with tensor-product input restriction, establishing a unified geometric–information-theoretic foundation for identification theory.

📝 Abstract
We investigate deterministic identification over arbitrary memoryless channels under the constraint that the error probabilities of first and second kind are exponentially small in the block length $n$, controlled by reliability exponents $E_1,E_2\geq 0$. In contrast to the regime of slowly vanishing errors, where the identifiable message length scales as $\Theta(n\log n)$, here we find that for positive exponents linear scaling is restored, now with a rate that is a function of the reliability exponents. We give upper and lower bounds on the ensuing rate-reliability function in terms of (the logarithm of) the packing and covering numbers of the channel output set, which for small error exponents $E_1,E_2>0$ can be expanded in leading order as the product of the Minkowski dimension of a certain parametrisation of the channel output set and $\log\min\{E_1,E_2\}$. These allow us to recover the previously observed slightly superlinear identification rates, and offer a different perspective for understanding them in more traditional information theory terms. We further illustrate our results with a discussion of the case of dimension zero, and extend them to classical-quantum channels and quantum channels with tensor product input restriction.
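For readers unfamiliar with the geometric quantities invoked above, the Minkowski (box-counting) dimension is defined from covering numbers; the following is the standard textbook formulation, not a definition quoted from this paper:

```latex
% N(S,\epsilon): covering number, i.e. the minimal number of
% \epsilon-balls needed to cover the bounded set S.
% The Minkowski (box-counting) dimension of S, when the limit exists:
\dim_M(S) \;=\; \lim_{\epsilon \to 0} \frac{\log N(S,\epsilon)}{\log(1/\epsilon)}
```

Intuitively, $N(S,\epsilon)$ grows like $\epsilon^{-\dim_M(S)}$ as $\epsilon\to 0$, which is how logarithms of packing/covering numbers of the channel output set translate into a dimension times a $\log\min\{E_1,E_2\}$ term in the rate expansion described in the abstract.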
Problem

Research questions and friction points this paper is trying to address.

Deterministic identification over memoryless channels
Error probabilities exponentially small in block length
Rate-reliability function bounds via packing and covering numbers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deterministic identification with reliability exponents
Upper and lower bounds using packing numbers
Extension to classical-quantum and quantum channels