Deterministic Identification Over Channels with Finite Output: A Dimensional Perspective on Superlinear Rates

📅 2024-02-14
🏛️ International Symposium on Information Theory
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work investigates the maximal growth rate of message size for deterministic identification (DI) over memoryless channels with finite output alphabets. It establishes that the number of identifiable messages grows super-exponentially as $2^{Rn\log n}$ in the blocklength $n$, where the optimal DI rate $R$ is characterized by the covering dimension $d$ of the channel's set of output distributions within the probability simplex. The paper provides the first quantitative link between DI capacity and geometric dimension, deriving the bounds $d/4 \leq R \leq d/2$. It proves that pairwise reliable distinguishability suffices for constructing DI codes and reveals a super-activation phenomenon: tensor products of single-letter channels, each with zero DI capacity, can have positive DI capacity. Methodologically, the analysis combines covering dimension theory, a hypothesis testing lemma, and algebraic transformations of the output set. The results extend to finite-dimensional classical-quantum and quantum channels.
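As an illustrative sketch (not taken from the paper), the gap between the super-exponential DI scaling $2^{Rn\log n}$ and ordinary exponential scaling $2^{Rn}$ can be made concrete by comparing exponents; the rate value below is an arbitrary assumption, and base-2 logarithms are used for convenience:

```python
import math

def log2_di_messages(R, n):
    """log2 of a DI code size scaling as 2^(R * n * log2(n))."""
    return R * n * math.log2(n)

def log2_transmission_messages(R, n):
    """log2 of an ordinary exponential code size 2^(R * n)."""
    return R * n

R = 0.5  # hypothetical rate; the paper bounds R between d/4 and d/2
for n in (10, 100, 1000):
    ratio = log2_di_messages(R, n) / log2_transmission_messages(R, n)
    # The ratio of exponents grows like log2(n), so the DI message
    # count eventually outpaces any fixed exponential rate.
    print(f"n={n}: exponent ratio = {ratio:.2f}")
```

The point of the sketch is only that no constant $R'$ makes $2^{R'n}$ keep up with $2^{Rn\log n}$: the exponent ratio is unbounded in $n$.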

📝 Abstract
Following initial work by JaJa, and Ahlswede and Cai, and inspired by a recent renewed surge in interest in deterministic identification (DI) via noisy channels, we consider the problem in its generality for memoryless channels with finite output, but arbitrary input alphabets. Such a channel is essentially given by (the closure of) the subset of its output distributions in the probability simplex. Our main findings are that the maximum number of messages thus identifiable scales super-exponentially as $2^{Rn\log n}$ with the block length $n$, and that the optimal rate $R$ is upper and lower bounded in terms of the covering (aka Minkowski, or Kolmogorov, or entropy) dimension $d$ of a certain algebraic transformation of the output set: $\frac{1}{4}d \leq R \leq \frac{1}{2}d$. Along the way, we present a Hypothesis Testing Lemma that shows it is sufficient to ensure pairwise reliable distinguishability of the output distributions to construct a DI code. Although we do not know the exact capacity formula, we can conclude that the DI capacity exhibits super-activation: there exist channels whose capacity is zero, but whose product has positive capacity. These results are then generalised to classical-quantum channels with finite-dimensional output quantum system (but arbitrary input alphabet), and in particular to quantum channels on finite-dimensional quantum systems under the constraint that the identification code can only use tensor product inputs.
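To make the covering (Minkowski) dimension concrete, the following hedged sketch estimates it by box counting on a toy output set: points $(x, y, 1-x-y)$ filling a 2-dimensional face of the probability simplex, so the estimated dimension should be near 2. The greedy separated-set heuristic and all names here are illustrative assumptions, not the paper's construction:

```python
import math
import random

def covering_number(points, eps):
    """Greedily build a maximal eps-separated subset (sup-norm).

    Its size bounds the eps-covering number of the point set within
    constant factors, which is enough for a dimension estimate.
    """
    centers = []
    for p in points:
        if all(max(abs(a - b) for a, b in zip(p, c)) > eps for c in centers):
            centers.append(p)
    return len(centers)

# Sample output distributions (x, y, 1-x-y) from a 2-D face of the simplex.
random.seed(0)
points = []
for _ in range(8000):
    x, y = random.random(), random.random()
    if x + y <= 1.0:
        points.append((x, y, 1.0 - x - y))

# Box-counting slope: log N(eps) / log(1/eps) approaches the covering
# dimension d as eps shrinks (here it should hover around 2).
for eps in (0.2, 0.1, 0.05):
    N = covering_number(points, eps)
    print(f"eps={eps}: N={N}, slope={math.log(N) / math.log(1 / eps):.2f}")
```

The same counting applied to the paper's algebraically transformed output set is what the bounds $\frac{1}{4}d \leq R \leq \frac{1}{2}d$ refer to; this sketch only illustrates how $d$ is measured.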
Problem

Research questions and friction points this paper is trying to address.

Deterministic identification over noisy channels
Superlinear scaling of message length
Bounds on optimal rate via covering dimension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Superlinear message identification scaling
Covering dimension bounds optimal rate
Pairwise reliable distinguishability for DI codes