🤖 AI Summary
Existing partial information decomposition (PID) methods for continuous, high-dimensional, multimodal settings rely on pairwise probability estimation, resulting in prohibitive computational cost and insufficient accuracy. To address this, we propose Gaussian Partial Information Decomposition (GPID): a framework that maps arbitrary joint distributions to pairwise Gaussian variables, with theoretical guarantees of optimality under the joint Gaussian assumption. We design a gradient-based differentiable optimization algorithm that jointly learns an information-preserving encoder, normalizing flows, and variational inference modules, yielding an end-to-end trainable PID estimator. GPID is the first method to enable efficient, accurate, and fully differentiable PID estimation in continuous, high-dimensional multimodal settings. Extensive experiments on synthetic data and multimodal benchmarks demonstrate substantial improvements over state-of-the-art approaches. Moreover, GPID supports principled model selection and facilitates interpretable analysis of shared, unique, and synergistic information across modalities.
📝 Abstract
The study of multimodality has garnered significant interest in fields where the analysis of interactions among multiple information sources can enhance predictive modeling, data fusion, and interpretability. Partial information decomposition (PID) has emerged as a useful information-theoretic framework to quantify the degree to which individual modalities independently, redundantly, or synergistically convey information about a target variable. However, existing PID methods depend on optimizing over a joint distribution constrained by estimated pairwise probability distributions, which are costly and inaccurate for continuous and high-dimensional modalities. Our first key insight is that the problem can be solved efficiently when the pairwise distributions are multivariate Gaussians, and we refer to this problem as Gaussian PID (GPID). We propose a new gradient-based algorithm that substantially improves the computational efficiency of GPID based on an alternative formulation of the underlying optimization problem. To generalize the applicability to non-Gaussian data, we learn information-preserving encoders to transform random variables of arbitrary input distributions into pairwise Gaussian random variables. Along the way, we resolved an open problem regarding the optimality of joint Gaussian solutions for GPID. Empirical validation in diverse synthetic examples demonstrates that our proposed method provides more accurate and efficient PID estimates than existing baselines. We further evaluate a series of large-scale multimodal benchmarks to show its utility in real-world applications of quantifying PID in multimodal datasets and selecting high-performing models.