Partial Information Decomposition via Normalizing Flows in Latent Gaussian Distributions

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing partial information decomposition (PID) methods for continuous, high-dimensional, multimodal settings rely on pairwise probability estimation, resulting in prohibitive computational cost and insufficient accuracy. To address this, we propose Gaussian Partial Information Decomposition (GPID): a framework that maps arbitrary joint distributions to pairwise Gaussian variables, with theoretical guarantees of optimality under the joint Gaussian assumption. We design a gradient-based differentiable optimization algorithm that jointly learns an information-preserving encoder, normalizing flows, and variational inference modules, yielding an end-to-end trainable PID estimator. GPID is the first method to enable efficient, accurate, and fully differentiable PID estimation in continuous, high-dimensional multimodal settings. Extensive experiments on synthetic data and multimodal benchmarks demonstrate substantial improvements over state-of-the-art approaches. Moreover, GPID supports principled model selection and facilitates interpretable analysis of shared, unique, and synergistic information across modalities.

📝 Abstract
The study of multimodality has garnered significant interest in fields where the analysis of interactions among multiple information sources can enhance predictive modeling, data fusion, and interpretability. Partial information decomposition (PID) has emerged as a useful information-theoretic framework to quantify the degree to which individual modalities independently, redundantly, or synergistically convey information about a target variable. However, existing PID methods depend on optimizing over a joint distribution constrained by estimated pairwise probability distributions, which are costly and inaccurate for continuous and high-dimensional modalities. Our first key insight is that the problem can be solved efficiently when the pairwise distributions are multivariate Gaussians; we refer to this problem as Gaussian PID (GPID). We propose a new gradient-based algorithm that substantially improves the computational efficiency of GPID based on an alternative formulation of the underlying optimization problem. To generalize its applicability to non-Gaussian data, we learn information-preserving encoders that transform random variables of arbitrary input distributions into pairwise Gaussian random variables. Along the way, we resolve an open problem regarding the optimality of joint Gaussian solutions for GPID. Empirical validation on diverse synthetic examples demonstrates that our proposed method provides more accurate and efficient PID estimates than existing baselines. We further evaluate our method on a series of large-scale multimodal benchmarks to show its utility in real-world applications: quantifying PID in multimodal datasets and selecting high-performing models.
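The tractability the abstract points to comes from the fact that, for jointly Gaussian variables, every mutual-information term entering a PID is available in closed form from log-determinants of covariance sub-blocks. A minimal numpy sketch (an illustration of the Gaussian closed form only, not the paper's GPID optimization; the covariance values and function names are hypothetical):

```python
import numpy as np

def gaussian_mi(cov, idx_x, idx_y):
    """I(X; Y) for jointly Gaussian variables:
    0.5 * (log|C_X| + log|C_Y| - log|C_XY|)."""
    cx = cov[np.ix_(idx_x, idx_x)]
    cy = cov[np.ix_(idx_y, idx_y)]
    cxy = cov[np.ix_(idx_x + idx_y, idx_x + idx_y)]
    return 0.5 * (np.linalg.slogdet(cx)[1]
                  + np.linalg.slogdet(cy)[1]
                  - np.linalg.slogdet(cxy)[1])

# Two scalar sources X1, X2 and a target Y; this covariance is an assumption.
cov = np.array([[1.0, 0.2, 0.6],
                [0.2, 1.0, 0.5],
                [0.6, 0.5, 1.0]])
mi_1  = gaussian_mi(cov, [0], [2])     # I(X1; Y)
mi_2  = gaussian_mi(cov, [1], [2])     # I(X2; Y)
mi_12 = gaussian_mi(cov, [0, 1], [2])  # I(X1, X2; Y)
# PID then splits mi_12 into redundant, unique, and synergistic parts;
# mi_12 - max(mi_1, mi_2) bounds the information beyond the stronger source.
```

Because each quantity is a differentiable function of the covariance matrix, a gradient-based optimizer can be run over such terms, which is the property the paper's end-to-end trainable estimator exploits.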
Problem

Research questions and friction points this paper is trying to address.

Quantifying multimodal information interactions via Gaussian distributions
Improving computational efficiency of partial information decomposition
Generalizing Gaussian PID to non-Gaussian data through encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Normalizing flows transform non-Gaussian data to Gaussian
Gradient-based algorithm optimizes Gaussian partial information decomposition
Information-preserving encoders enable multimodal PID for arbitrary distributions
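A learned normalizing flow is an invertible map trained to make the transformed data Gaussian. As a hedged, much-simplified stand-in for the flows listed above, a classical rank-based Gaussianization of a single marginal can be sketched as follows (the function name is hypothetical; the paper's flows are learned and multivariate):

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    """Rank-based Gaussianization of a 1-D sample: push samples through
    the empirical CDF, then the inverse standard-normal CDF. The result
    is a monotone, invertible map to an approximately N(0, 1) variable,
    which a learned normalizing flow generalizes to multivariate data."""
    ranks = np.argsort(np.argsort(x))
    u = (ranks + 0.5) / len(x)   # empirical CDF values in (0, 1)
    return norm.ppf(u)

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)   # heavily skewed non-Gaussian input
z = gaussianize(x)               # approximately standard normal
```

Because the map is monotone and invertible, it preserves the information content of each variable while making the Gaussian closed-form machinery applicable.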
Wenyuan Zhao
Texas A&M University
Adithya Balachandran
Massachusetts Institute of Technology
Chao Tian
Department of Electrical and Computer Engineering, Texas A&M University
information theory · machine learning · data storage · optimization
Paul Pu Liang
Massachusetts Institute of Technology