Mutual Information Bounds for Lossy Common Information

📅 2025-07-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the quantitative relationship between mutual information and two classical notions of lossy common information—Wyner-type and Gács–Körner-type—for a pair of target random variables in the Gray–Wyner network. Method: By systematically incorporating mutual information into the boundary analysis of lossy common information, the authors derive tight theoretical bounds. Contribution/Results: The work establishes, for the first time, that mutual information serves as the exact lower bound for Wyner’s lossy common information and the exact upper bound for Gács–Körner’s lossy common information. This generalizes Wyner’s seminal 1975 result from the lossless to the lossy regime, thereby bridging a long-standing theoretical gap between the two paradigms. The framework unifies these historically parallel notions of common information within information theory and provides a foundational basis for quantifying redundancy in lossy collaborative coding and distributed learning.

Technology Category

Application Category

📝 Abstract
We show the mutual information between the targets in a Gray-Wyner Network as a bound that separates Wyner's lossy common information and Gács-Körner lossy common information. The results are a generalization of the lossless case presented by Wyner (1975).
Problem

Research questions and friction points this paper is trying to address.

Bounds mutual information in Gray-Wyner Network
Separates Wyner's and Gács-Körner lossy common information
Generalizes Wyner's 1975 lossless case results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutual information bounds in Gray-Wyner Network
Generalizes Wyner's lossless case results
Separates two types of lossy common information
🔎 Similar Papers
No similar papers found.