🤖 AI Summary
This paper studies expurgated error exponents for source coding with side information. To overcome the computational burden and limited generality of conventional primal-domain approaches, which optimize over distributions, the authors propose an expurgation method that works directly in the dual domain: the optimization runs over only a few parameters, and any sub-optimal choice of those parameters still yields an achievable exponent. The dual-domain formulation also extends naturally to general alphabets and sources with memory, and it produces two new expurgated error exponents for different random-coding ensembles. The better of the two coincides with the Csiszár–Körner exponent obtained via a graph decomposition lemma, and in the special case of source coding without side information the expurgated exponent reduces to the error exponent of the optimal code. Numerical examples illustrate the differences between the two exponents.
📝 Abstract
We introduce an expurgation method for source coding with side information that enables direct dual-domain derivations of expurgated error exponents. Dual-domain methods yield optimization problems over few parameters, with any sub-optimal choice resulting in an achievable exponent, as opposed to primal-domain optimization over distributions. In addition, dual-domain methods naturally allow for general alphabets and/or memory. We derive two such expurgated error exponents for different random-coding ensembles. We show that the better of the two exponents coincides with the Csiszár–Körner exponent obtained via a graph decomposition lemma. We provide numerical examples that illustrate the differences between the two exponents and show that, in the case of source coding without side information, the expurgated exponent coincides with the error exponent of the optimal code.
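To make the primal/dual contrast concrete, the following is a minimal illustration using the classical (non-expurgated) random-binning exponent for coding a source $X$ at rate $R$ with side information $Y$ at the decoder. This is textbook material (Gallager's $E_0$ form), not the paper's new expurgated exponents, which we assume follow the same pattern with different $E_0$-type functions.

```latex
% Illustration only: classical random-binning exponent for
% Slepian--Wolf-style coding of X at rate R with side information Y
% at the decoder; not the paper's expurgated exponents.

% Primal domain: minimization over all joint distributions Q_{XY},
% where [t]^+ = max(t, 0).
E_r(R) = \min_{Q_{XY}} \Big[ D(Q_{XY} \,\|\, P_{XY})
         + \big[\, R - H_Q(X \mid Y) \,\big]^+ \Big]

% Dual domain (Gallager form): maximization over a single scalar rho;
% any fixed rho in [0,1] already yields an achievable exponent.
E_r(R) = \max_{0 \le \rho \le 1} \big[ \rho R - E_0(\rho) \big],
\qquad
E_0(\rho) = \log \sum_{y} \Bigg( \sum_{x}
            P_{XY}(x,y)^{\frac{1}{1+\rho}} \Bigg)^{1+\rho}
```

The dual form shows the advantage the abstract highlights: fixing any $\rho \in [0,1]$ gives a valid achievable exponent, whereas the primal form requires a minimization over the full space of joint distributions $Q_{XY}$.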