Lossy Common Information in a Learnable Gray-Wyner Network

📅 2026-01-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the substantial redundancy in multi-task computer vision, where conventional codecs struggle to disentangle shared and task-specific information. For the first time, the Gray-Wyner information-theoretic framework is integrated into an end-to-end learnable codec architecture, yielding a lossy common information modeling approach tailored for multi-task visual representation. The proposed method employs a three-branch network to explicitly decouple shared and task-exclusive content and introduces an optimization objective grounded in lossy common information theory. Evaluated in dual-task settings across six visual benchmarks, the approach consistently outperforms independent encoding while significantly reducing representational redundancy, demonstrating the practical relevance of classical information theory in modern representation learning.

๐Ÿ“ Abstract
Many computer vision tasks share substantial overlapping information, yet conventional codecs tend to ignore this, leading to redundant and inefficient representations. The Gray-Wyner network, a classical concept from information theory, offers a principled framework for separating common and task-specific information. Inspired by this idea, we develop a learnable three-channel codec that disentangles shared information from task-specific details across multiple vision tasks. We characterize the limits of this approach through the notion of lossy common information, and propose an optimization objective that balances inherent tradeoffs in learning such representations. Through comparisons of three codec architectures on two-task scenarios spanning six vision benchmarks, we demonstrate that our approach substantially reduces redundancy and consistently outperforms independent coding. These results highlight the practical value of revisiting Gray-Wyner theory in modern machine learning contexts, bridging classic information theory with task-driven representation learning.
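To make the three-branch idea concrete, here is a minimal numpy sketch of a Gray-Wyner-style objective: a common latent `w` is sent to both task decoders, private latents `v1`, `v2` go to one decoder each, and the loss trades rate against the two task distortions. This is an illustrative toy, not the paper's implementation; the linear maps `W`, `V`, `D` and the squared-norm rate proxy (standing in for a learned entropy model) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def gray_wyner_loss(x1_hat, x1, x2_hat, x2, w, v1, v2, lam=1.0):
    """Toy Gray-Wyner objective: rate proxies on the common latent w
    and the private latents v1, v2, plus reconstruction distortions
    for the two tasks. Rates are approximated here by squared L2
    norms, a crude stand-in for learned entropy coding."""
    rate = (w ** 2).sum() + (v1 ** 2).sum() + (v2 ** 2).sum()
    dist = ((x1_hat - x1) ** 2).mean() + ((x2_hat - x2) ** 2).mean()
    return rate + lam * dist

# Two correlated "sources" standing in for two vision tasks' inputs.
x1 = rng.normal(size=8)
x2 = x1 + 0.1 * rng.normal(size=8)

# Toy three-branch codec: one shared encoder, one private encoder per task.
W = rng.normal(size=(4, 8))          # common-branch encoder
V = rng.normal(size=(2, 8))          # private-branch encoder (shared weights for brevity)
w, v1, v2 = W @ x1, V @ x1, V @ x2   # common + task-specific latents

D = rng.normal(size=(8, 6))          # decoder over concatenated (common, private) latent
x1_hat = D @ np.concatenate([w, v1])  # task-1 decoder sees (w, v1)
x2_hat = D @ np.concatenate([w, v2])  # task-2 decoder sees (w, v2)

loss = gray_wyner_loss(x1_hat, x1, x2_hat, x2, w, v1, v2)
```

In a learnable codec the maps would be deep networks trained by gradient descent, but the structure of the objective — a shared rate term plus per-task private rates and distortions — is the same.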
Problem

Research questions and friction points this paper is trying to address.

lossy common information
Gray-Wyner network
representation redundancy
multi-task vision
information disentanglement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gray-Wyner network
lossy common information
learnable codec
representation disentanglement
multi-task vision
Anderson de Andrade
Simon Fraser University
Machine Learning · Signal Processing
Alon Harell
School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
Ivan V. Bajić
School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada