MIST: Mutual Information Via Supervised Training

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations of conventional mutual information (MI) estimators—poor generalization, computational inefficiency, and inability to quantify estimation uncertainty. We propose a fully data-driven neural estimator designed to overcome these challenges. Methodologically, we introduce an end-to-end differentiable neural architecture incorporating a 2D permutation-invariant attention mechanism to model joint distributions, employ quantile regression to produce calibrated uncertainty intervals, and leverage normalizing flows to synthesize a diverse, multimodal, multiscale meta-dataset for fully supervised training. Compared to classical and state-of-the-art neural MI estimators, our approach achieves significantly higher estimation accuracy across varying sample sizes and high-dimensional settings, accelerates inference by several orders of magnitude, yields more reliable confidence intervals, and natively integrates into end-to-end learning pipelines.
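The summary notes that supervised training relies on a synthetic meta-dataset with known ground-truth MI. One standard building block for such datasets (the paper's exact generation recipe may differ) is an analytic family with closed-form MI, such as the bivariate Gaussian; a minimal sketch:

```python
import numpy as np

def gaussian_mi(rho):
    """Closed-form MI (in nats) of a bivariate Gaussian with correlation rho.

    Analytic families like this give (samples, true-MI) pairs for supervised
    training; richer distributions can be derived by pushing them through
    invertible maps, which leave MI unchanged.
    """
    return -0.5 * np.log(1.0 - rho**2)
```

For example, `gaussian_mi(0.0)` is 0 (independence), and MI grows without bound as `rho` approaches 1.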

📝 Abstract
We propose a fully data-driven approach to designing mutual information (MI) estimators. Since any MI estimator is a function of the observed sample from two random variables, we parameterize this function with a neural network (MIST) and train it end-to-end to predict MI values. Training is performed on a large meta-dataset of 625,000 synthetic joint distributions with known ground-truth MI. To handle variable sample sizes and dimensions, we employ a two-dimensional attention scheme ensuring permutation invariance across input samples. To quantify uncertainty, we optimize a quantile regression loss, enabling the estimator to approximate the sampling distribution of MI rather than return a single point estimate. This research program departs from prior work by taking a fully empirical route, trading universal theoretical guarantees for flexibility and efficiency. Empirically, the learned estimators largely outperform classical baselines across sample sizes and dimensions, including on joint distributions unseen during training. The resulting quantile-based intervals are well-calibrated and more reliable than bootstrap-based confidence intervals, while inference is orders of magnitude faster than existing neural baselines. Beyond immediate empirical gains, this framework yields trainable, fully differentiable estimators that can be embedded into larger learning pipelines. Moreover, exploiting MI's invariance to invertible transformations, meta-datasets can be adapted to arbitrary data modalities via normalizing flows, enabling flexible training for diverse target meta-distributions.
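The abstract's quantile regression loss is what lets the network output a sampling distribution rather than a point estimate. A minimal sketch of the standard quantile ("pinball") loss, here in NumPy for illustration (the paper's training stack and quantile levels are not specified in this card):

```python
import numpy as np

def pinball_loss(pred_quantiles, target, taus):
    """Quantile regression (pinball) loss.

    pred_quantiles: (n, k) predicted quantiles of the MI estimate
    target:         (n,)   ground-truth MI values
    taus:           (k,)   quantile levels, e.g. (0.05, 0.5, 0.95)

    Penalizes under-prediction with weight tau and over-prediction with
    weight (1 - tau), so each output head converges to its quantile.
    """
    taus = np.asarray(taus, dtype=float)
    diff = target[:, None] - pred_quantiles          # (n, k)
    return np.mean(np.maximum(taus * diff, (taus - 1.0) * diff))
```

Training heads at, say, the 5th and 95th percentiles then yields the calibrated intervals the abstract compares against bootstrap-based confidence intervals.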
Problem

Research questions and friction points this paper is trying to address.

Designing fully data-driven neural mutual information estimators via supervised training
Handling variable sample sizes and dimensions with permutation-invariant attention mechanisms
Quantifying uncertainty through quantile regression for reliable confidence intervals
Innovation

Methods, ideas, or system contributions that make the work stand out.

A neural network directly parameterizes the MI estimator, trained end-to-end on synthetic distributions with known MI
A two-dimensional permutation-invariant attention scheme handles variable sample sizes and dimensions
Quantile regression yields calibrated uncertainty intervals instead of a single point estimate
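The permutation-invariance requirement above means shuffling the n input samples must not change the estimate. The paper achieves this with a 2D attention mechanism; a much simpler DeepSets-style encoder (toy weights, hypothetical names) illustrates the property itself:

```python
import numpy as np

def encode_sample_set(xy, w1, w2):
    """Toy permutation-invariant set encoder (DeepSets-style).

    xy: (n, d) array of n joint samples from (X, Y), stacked along d dims.
    w1, w2: toy weight matrices, for illustration only.

    Per-sample features are pooled with a symmetric operation (mean), so
    reordering the rows of xy cannot change the output embedding.
    """
    h = np.tanh(xy @ w1)           # per-sample features, (n, hidden)
    pooled = h.mean(axis=0)        # symmetric pooling over the sample axis
    return np.tanh(pooled @ w2)    # embedding fed to the MI / quantile head
```

Mean pooling also handles variable n for free; the paper's 2D attention additionally pools across dimensions, handling variable d.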