🤖 AI Summary
This work addresses the fundamental problem of characterizing the minimum achievable redundancy of exact channel simulation, a core building block of modern lossy data compression. For the simulation of i.i.d. non-singular channels, we establish for the first time the tight asymptotic lower bound of $\frac{1}{2}$ on the redundancy, overcoming limitations of prior analytical frameworks. To this end, we introduce the *channel simulation divergence*, a novel information-theoretic quantity that yields a universal lower bound on the redundancy of any instance of a channel simulation problem. We give two independent proofs of the asymptotic bound: one via a second-order asymptotic expansion of the channel simulation divergence and another grounded in large deviations theory. These results complete and extend the Sriramu–Wagner theory of channel simulation. Moreover, they deliver a precise theoretical performance benchmark for learning-based compression algorithms, particularly channel-simulation-based rate-distortion optimization methods, revealing an inherent rate overhead that cannot be circumvented by any algorithmic design.
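To make the headline claim concrete, here is one schematic way to state it. The notation is ours, not a quotation from the paper: write $L_n$ for the codelength of an exact simulation of $n$ i.i.d. uses of a channel $P_{Y|X}$ with input $X \sim P_X$, and define the redundancy as the excess of the expected codelength over the information rate. Reading the abstract's bound of $1/2$ on the $\log n$ scale suggested by the logarithmic one-shot overhead of Li and El Gamal (an assumption on our part), the result takes the form

$$\rho_n := \mathbb{E}[L_n] - n\, I(X;Y), \qquad \liminf_{n \to \infty} \frac{\rho_n}{\log_2 n} \;\ge\; \frac{1}{2} \quad \text{for non-singular } P_{Y|X}.$$

Read this way, the lower bound meets the achievability side of the Sriramu–Wagner characterisation, which is what makes the constant $1/2$ tight.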
📝 Abstract
Channel simulation is an alternative to quantization and entropy coding for performing lossy source coding. Recently, channel simulation has gained significant traction in both the machine learning and information theory communities, as it integrates more naturally with machine learning-based data compression algorithms and has better rate-distortion-perception properties than quantization. As the practical importance of channel simulation increases, it is vital to understand its fundamental limitations. More recently, Sriramu and Wagner provided an almost complete characterisation of the redundancy of channel simulation algorithms. In this paper, we complete this characterisation. First, we significantly extend a result of Li and El Gamal and show that the redundancy of any instance of a channel simulation problem is lower bounded by the channel simulation divergence. Second, we give two proofs that the asymptotic redundancy of simulating i.i.d. non-singular channels is lower bounded by $1/2$: one using a direct approach based on the asymptotic expansion of the channel simulation divergence, and one using large deviations theory.
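To ground what "channel simulation" means operationally, below is a minimal sketch of the Poisson functional representation of Li and El Gamal, the construction underlying many channel simulation schemes: encoder and decoder share a sequence of proposal samples and Poisson arrival times, and the encoder transmits only the index of the winning sample, which is distributed exactly according to the target conditional. The Gaussian toy channel, the bounded-ratio stopping rule, and all identifiers are our own illustrative choices, not code from the paper.

```python
# A minimal, self-contained sketch (ours, not the authors' code) of channel
# simulation via the Poisson functional representation of Li and El Gamal.
# Encoder and decoder share the proposal samples Z_1, Z_2, ... and the Poisson
# arrival times T_1 < T_2 < ..., so transmitting the winning index K suffices
# for the decoder to reproduce the exact sample Z_K ~ P.
import numpy as np

def poisson_functional_representation(log_ratio, log_ratio_max, sample_proposal, rng):
    """Return (K, Z_K) with Z_K distributed exactly according to the target P,
    where P is specified relative to the proposal Q by log_ratio = log dP/dQ.
    Requires a bounded ratio (log_ratio_max) so the search terminates."""
    t, i = 0.0, 0
    best_score, best_index, best_sample = np.inf, None, None
    while True:
        i += 1
        t += rng.exponential(1.0)              # T_i: unit-rate Poisson arrivals
        # Once even the largest possible density ratio cannot push the score
        # below the current best, no later arrival can win either: stop.
        if np.log(t) - log_ratio_max > best_score:
            return best_index, best_sample
        z = sample_proposal(rng)               # Z_i ~ Q (shared randomness)
        score = np.log(t) - log_ratio(z)       # = log( T_i / (dP/dQ)(Z_i) )
        if score < best_score:
            best_score, best_index, best_sample = score, i, z

# Toy additive Gaussian channel Y = x + N(0, var_p) with source X ~ N(0, 1):
# target P = N(x, var_p), proposal Q = marginal P_Y = N(0, var_q). Since
# var_p < var_q, the density ratio dP/dQ is bounded and peaks at its mode.
x, var_p, var_q = 0.8, 0.25, 1.25

def log_ratio(z):
    return (0.5 * np.log(var_q / var_p)
            - (z - x) ** 2 / (2 * var_p) + z ** 2 / (2 * var_q))

mode = (x / var_p) / (1 / var_p - 1 / var_q)   # maximizer of log_ratio
rng = np.random.default_rng(0)
samples, indices = [], []
for _ in range(5000):
    k, z = poisson_functional_representation(
        log_ratio, log_ratio(mode), lambda r: r.normal(0.0, np.sqrt(var_q)), rng)
    indices.append(k)
    samples.append(z)

print("sample mean/std:", np.mean(samples), np.std(samples))   # ~ 0.8, ~ 0.5
print("mean winning index K:", np.mean(indices))               # codelength ~ log K
```

Transmitting $K$ with a suitable prefix code costs on the order of $\log K$ bits, which is how Li and El Gamal's one-shot bound of $I(X;Y) + \log(I(X;Y) + 1) + O(1)$ bits arises; the redundancy studied in this paper is the part of such overheads that no scheme can remove in the i.i.d. block setting.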