Connecting Jensen-Shannon and Kullback-Leibler Divergences: A New Bound for Representation Learning

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Accurate estimation of mutual information (MI) remains challenging in representation learning because the KL divergence (KLD) between the joint distribution and the product of its marginals is generally intractable. Method: We propose a novel, computationally tractable lower bound on KLD expressed as a function of the Jensen-Shannon divergence (JSD), derived within a variational discrimination framework. The bound is implemented via a binary cross-entropy loss and estimated efficiently with neural networks, yielding low variance and stable training. Contribution/Results: Our key theoretical contribution is the first general, tight lower bound linking JSD to KLD, which rigorously justifies JSD as a principled surrogate for MI. On established reference benchmarks, our estimator consistently provides stable, low-variance, tight lower-bound estimates of MI compared with state-of-the-art neural estimators, and we demonstrate its practical value within the Information Bottleneck framework. This work unifies discriminative learning objectives with MI maximization, establishing a solid theoretical foundation and strong empirical support for JSD-based representation learning.
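A minimal PyTorch sketch of the kind of discriminative estimator described above: a critic scores (x, y) pairs, and its binary cross-entropy objective yields a variational lower bound on the JSD between the joint and the product of marginals. The architecture, function names, and the within-batch shuffle used to simulate marginal pairs are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """Scores (x, y) pairs; higher scores should indicate pairs drawn from the joint."""
    def __init__(self, x_dim, y_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def jsd_lower_bound(critic, x, y):
    """BCE-based variational lower bound on JSD(p(x,y), p(x)p(y)).

    Joint pairs are the aligned (x_i, y_i); product-of-marginals pairs are
    approximated by shuffling y within the batch.
    """
    joint_scores = critic(x, y)                                  # samples from p(x, y)
    y_shuffled = y[torch.randperm(y.size(0), device=y.device)]   # approx. p(x)p(y)
    marg_scores = critic(x, y_shuffled)

    # With D = sigmoid(score): log D = -softplus(-score), log(1 - D) = -softplus(score).
    e_joint = -F.softplus(-joint_scores).mean()   # E_joint[log D]
    e_marg = -F.softplus(marg_scores).mean()      # E_marg[log(1 - D)]

    # JSD >= log 2 + 0.5 * (E_joint[log D] + E_marg[log(1 - D)])
    return math.log(2.0) + 0.5 * (e_joint + e_marg)
```

Maximizing this quantity over the critic tightens the JSD estimate; the paper's theoretical result then converts the estimated JSD into a guaranteed lower bound on MI.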

📝 Abstract
Mutual Information (MI) is a fundamental measure of statistical dependence widely used in representation learning. While direct optimization of MI via its definition as a Kullback-Leibler divergence (KLD) is often intractable, many recent methods instead maximize alternative dependence measures, most notably the Jensen-Shannon divergence (JSD) between the joint distribution and the product of the marginals, optimized through discriminative losses. However, the connection between these surrogate objectives and MI remains poorly understood. In this work, we bridge this gap by deriving a new, tight, and tractable lower bound on KLD as a function of JSD in the general case. By specializing this bound to the joint and product-of-marginals distributions, we show that maximizing the JSD-based objective increases a guaranteed lower bound on mutual information. Furthermore, we revisit the practical implementation of JSD-based objectives and observe that minimizing the cross-entropy loss of a binary classifier trained to distinguish joint pairs from marginal pairs recovers a known variational lower bound on the JSD. Extensive experiments demonstrate that our lower bound is tight when applied to MI estimation. We compare it against state-of-the-art neural variational lower-bound estimators across a range of established reference scenarios; our estimator consistently provides a stable, low-variance estimate of a tight lower bound on MI. We also demonstrate its practical usefulness in the context of the Information Bottleneck framework. Taken together, our results provide new theoretical justification and strong empirical evidence for using discriminative learning in MI-based representation learning.
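For reference, the two standard relations the abstract relies on can be written explicitly. These are well-known identities (MI as a KLD, and the GAN-style variational lower bound on JSD recovered from binary cross-entropy), not the paper's new KLD-from-JSD bound, which is stated in the paper itself.

```latex
% Mutual information as a KL divergence between the joint and the product of marginals
I(X;Y) \;=\; D_{\mathrm{KL}}\!\big(p(x,y)\,\|\,p(x)\,p(y)\big)

% Known variational lower bound on JSD recovered from the binary cross-entropy of a
% classifier D that separates joint pairs from product-of-marginals pairs
\mathrm{JSD}\!\big(p(x,y),\,p(x)\,p(y)\big) \;\ge\;
\log 2 \;+\; \tfrac{1}{2}\Big(
  \mathbb{E}_{p(x,y)}\big[\log D(x,y)\big]
  \;+\; \mathbb{E}_{p(x)p(y)}\big[\log\big(1 - D(x,y)\big)\big] \Big)

% Equality holds at the optimal classifier D^*(x,y) = p(x,y) / (p(x,y) + p(x)p(y)).
```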
Problem

Research questions and friction points this paper is trying to address.

Relating the Jensen-Shannon divergence to the Kullback-Leibler divergence via a tight, tractable lower bound
Connecting surrogate JSD objectives to mutual information in representation learning
Justifying discriminative learning objectives in information-based frameworks on theoretical grounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derives a tight, tractable lower bound on KLD as a function of JSD
Shows that maximizing the JSD objective increases a guaranteed lower bound on MI
Implements the JSD bound with a binary cross-entropy classifier that separates joint from marginal pairs (see the training sketch below)
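Below is a hedged sketch of how such a classifier-based objective might be used as an MI surrogate when training an encoder, as in the Information Bottleneck experiments mentioned above. It reuses the illustrative Critic and jsd_lower_bound from the earlier sketch; dimensions, optimizer settings, and the random placeholder data are assumptions, not the paper's setup.

```python
# Illustrative training loop; assumes Critic and jsd_lower_bound from the sketch above.
import torch

x_dim, z_dim = 64, 16
encoder = torch.nn.Sequential(
    torch.nn.Linear(x_dim, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, z_dim),
)
critic = Critic(x_dim, z_dim)
opt = torch.optim.Adam(list(encoder.parameters()) + list(critic.parameters()), lr=1e-4)

for step in range(1000):
    x = torch.randn(256, x_dim)              # placeholder batch; substitute real data
    z = encoder(x)
    loss = -jsd_lower_bound(critic, x, z)    # maximize the JSD-based MI surrogate
    opt.zero_grad()
    loss.backward()
    opt.step()
```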