🤖 AI Summary
This paper addresses the quantitative relationship between mutual information and two classical notions of lossy common information, Wyner-type and Gács–Körner-type, for the pair of target random variables in the Gray–Wyner network.
Method: The authors bring mutual information into the boundary analysis of the Gray–Wyner rate region, where both lossy common-information quantities are defined, and derive tight theoretical bounds from this comparison.
Contribution/Results: The work establishes, for the first time, that mutual information is a lower bound on Wyner's lossy common information and an upper bound on Gács–Körner's lossy common information, with both bounds tight. This generalizes Wyner's seminal 1975 result from the lossless to the lossy regime, closing a long-standing theoretical gap between the two paradigms. The framework unifies these historically parallel notions of common information and provides a basis for quantifying redundancy in lossy collaborative coding and distributed learning.
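Stated compactly, writing $C_{GK}(X,Y;\Delta_1,\Delta_2)$ and $C_W(X,Y;\Delta_1,\Delta_2)$ for the Gács–Körner and Wyner lossy common information at distortion pair $(\Delta_1,\Delta_2)$ (notation assumed here for illustration, not taken from the paper), the result is a sandwich ordering:

```latex
% Sandwich ordering implied by the summary; the symbols C_{GK}, C_W
% and the distortion pair (\Delta_1, \Delta_2) are assumed notation.
C_{GK}(X,Y;\Delta_1,\Delta_2) \;\le\; I(X;Y) \;\le\; C_{W}(X,Y;\Delta_1,\Delta_2)
```

Under suitable distortion measures, letting the distortions vanish should recover the familiar lossless ordering $K(X;Y) \le I(X;Y) \le C(X;Y)$ of Gács–Körner (1973) and Wyner (1975).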
📝 Abstract
We show that the mutual information between the targets in a Gray–Wyner network is a bound separating Wyner's lossy common information from the Gács–Körner lossy common information: it lower-bounds the former and upper-bounds the latter. These results generalize the lossless case presented by Wyner (1975).
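As a sanity check on the lossless ordering that the paper generalizes, the following sketch (not from the paper) numerically compares the three lossless quantities for a doubly symmetric binary source with crossover probability $a_0$, using the known closed forms: $K(X;Y) = 0$ for $0 < a_0 < 1$ (Gács–Körner, 1973), $I(X;Y) = 1 - h(a_0)$, and Wyner's (1975) $C(X;Y) = 1 + h(a_0) - 2h(a_1)$ with $a_0 = 2a_1(1-a_1)$:

```python
import numpy as np

def h(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def dsbs_quantities(a0: float):
    """Lossless common-information quantities for a doubly symmetric
    binary source with Pr[X != Y] = a0, where 0 < a0 <= 1/2.

    Closed forms (Gács–Körner 1973; Wyner 1975):
      K(X;Y) = 0                      (the joint pmf is indecomposable)
      I(X;Y) = 1 - h(a0)
      C(X;Y) = 1 + h(a0) - 2*h(a1),   where a0 = 2*a1*(1 - a1).
    """
    a1 = 0.5 * (1 - np.sqrt(1 - 2 * a0))
    K = 0.0
    I = 1 - h(a0)
    C = 1 + h(a0) - 2 * h(a1)
    return K, I, C

for a0 in (0.05, 0.10, 0.25, 0.45):
    K, I, C = dsbs_quantities(a0)
    assert K <= I <= C  # the lossless sandwich K <= I <= C
    print(f"a0={a0:.2f}:  K={K:.4f} <= I={I:.4f} <= C={C:.4f}")
```

Every row satisfies $K \le I \le C$; the paper's contribution is that the same ordering, with $I(X;Y)$ in the middle, persists for the lossy versions of the outer two quantities.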