🤖 AI Summary
This paper develops second-order achievability bounds for source and channel coding problems with side information in network information theory, namely the Wyner–Ziv, Heegard–Berger, and cost-constrained Gelfand–Pinsker problems. To overcome limitations of existing second-order analysis frameworks, the authors introduce an analytical paradigm termed "type deviation convergence," which uniformly characterizes the statistical dependence induced by side information in both source and channel coding settings. By combining the method of types, deviation analysis, random coding, and joint-typicality decoding, they derive second-order achievability bounds for several problem classes that improve upon prior results of Watanabe–Kuzuoka–Tan, Yassaee–Aref–Gohari, and Li–Anantharam. The framework provides a more general and precise tool for finite-blocklength performance analysis of coding with side information.
📝 Abstract
We propose a framework for second-order achievability, called type deviation convergence, that is generally applicable to settings in network information theory and is especially suitable for lossy source coding and channel coding with cost. We give a second-order achievability bound for lossy source coding with side information at the decoder (the Wyner–Ziv problem) that improves upon all known bounds (e.g., Watanabe–Kuzuoka–Tan, Yassaee–Aref–Gohari, and Li–Anantharam). We also give second-order achievability bounds for lossy compression where side information may be absent (the Heegard–Berger problem) and for channels with noncausal state information at the encoder and a cost constraint (the Gelfand–Pinsker problem with cost) that improve upon previous bounds.
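For orientation, second-order (dispersion-style) achievability bounds of the kind discussed above typically take the following schematic shape. This is a generic sketch common in the finite-blocklength literature, not the paper's actual statement; the dispersion quantity $V$ and the remainder term are placeholders, and it is precisely in such terms that competing bounds are compared:

```latex
% Schematic second-order achievability bound for lossy compression with
% decoder side information (generic dispersion-style form; the paper's
% exact dispersion V and remainder term are not reproduced here).
R(n, \varepsilon, D) \;\le\; R_{\mathrm{WZ}}(D)
  \;+\; \sqrt{\tfrac{V}{n}}\, Q^{-1}(\varepsilon)
  \;+\; O\!\left(\tfrac{\log n}{n}\right),
\qquad
Q(x) \;=\; \int_{x}^{\infty} \tfrac{1}{\sqrt{2\pi}}\, e^{-t^{2}/2}\, \mathrm{d}t .
```

Here $R_{\mathrm{WZ}}(D)$ is the Wyner–Ziv rate-distortion function, $n$ the blocklength, $\varepsilon$ the tolerated excess-distortion probability, and $Q^{-1}$ the inverse of the Gaussian complementary CDF.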