🤖 AI Summary
This work investigates the minimum blocklength—i.e., sample complexity—required for lossless data compression in the non-asymptotic regime, subject to constraints on the coding rate and the excess-rate probability. It introduces the notion of sample complexity into lossless compression for the first time, establishing a fundamental connection between compression and identity testing through tools from non-asymptotic information theory, the Rényi entropy of order 1/2, and hypothesis testing. Tight bounds on the sample complexity are derived for memoryless sources, Markov sources, and universal settings, revealing that the required number of samples is governed by the Rényi entropy rate or the minimal Rényi divergence rather than the Shannon entropy. The analysis also explicitly characterizes the constant terms appearing in these bounds.
📝 Abstract
A new framework is introduced for examining and evaluating the fundamental limits of lossless data compression, one that emphasizes genuinely non-asymptotic results. The {\em sample complexity} of compressing a given source is defined as the smallest blocklength at which it is possible to compress that source at a specified rate and to within a specified excess-rate probability. This formulation parallels corresponding developments in statistics and computer science, and it facilitates the use of existing results on the sample complexity of various hypothesis testing problems. For arbitrary sources, the sample complexity of general variable-length compressors is shown to be tightly coupled with the sample complexity of prefix-free codes and fixed-length codes. For memoryless sources, it is shown that the sample complexity is characterized not by the source entropy, but by its R\'{e}nyi entropy of order~$1/2$. Non-asymptotic bounds on the sample complexity are obtained, with explicit constants. Generalizations to Markov sources are established, showing that the sample complexity is determined by the source's R\'{e}nyi entropy rate of order~$1/2$. Finally, bounds on the sample complexity of universal data compression are developed for arbitrary families of memoryless sources. There, the sample complexity is characterized by the minimum R\'{e}nyi divergence of order~$1/2$ between elements of the family and the uniform distribution. The connection of this problem with identity testing and with the associated separation rates is explored and discussed.
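As a concrete illustration of the quantities the abstract refers to (this sketch is not part of the paper), the Rényi entropy of order $1/2$ of a distribution $P$ is $H_{1/2}(P) = 2\log_2 \sum_i \sqrt{p_i}$, and for the uniform distribution $U_k$ on $k$ symbols the Rényi divergence of order $1/2$ simplifies to $D_{1/2}(P \,\|\, U_k) = \log_2 k - H_{1/2}(P)$. The function names below are illustrative, not from the paper:

```python
import math

def renyi_entropy_half(p):
    """Renyi entropy of order 1/2 in bits: H_{1/2}(P) = 2*log2(sum_i sqrt(p_i))."""
    return 2.0 * math.log2(sum(math.sqrt(pi) for pi in p))

def renyi_div_half_from_uniform(p):
    """Renyi divergence of order 1/2 from the uniform distribution on len(p) symbols:
    D_{1/2}(P || U_k) = -2*log2(sum_i sqrt(p_i / k)) = log2(k) - H_{1/2}(P)."""
    return math.log2(len(p)) - renyi_entropy_half(p)

def shannon_entropy(p):
    """Shannon entropy in bits, for comparison with the order-1/2 quantity."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Example: a biased binary source. Since Renyi entropy is nonincreasing in its
# order, H_{1/2}(P) >= H(P), so a sample-complexity characterization via
# H_{1/2} differs from one based on the Shannon entropy.
p = [0.9, 0.1]
print(renyi_entropy_half(p), shannon_entropy(p), renyi_div_half_from_uniform(p))
```

For the uniform distribution the two entropies coincide ($H_{1/2}(U_k) = H(U_k) = \log_2 k$) and the divergence from uniform vanishes; for skewed sources, $H_{1/2}$ strictly exceeds the Shannon entropy.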