Joint Learning of Energy-based Models and their Partition Function

📅 2025-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training energy-based models (EBMs) over combinatorially large discrete spaces, such as sets and permutations, is hindered by the intractability of the partition function, rendering exact maximum likelihood estimation (MLE) infeasible. To address this, we propose a neural framework that jointly parameterizes both the energy function and the log-partition function. We prove that this framework recovers the optimal MLE solution when optimizing in the space of continuous functions. Moreover, we extend the sparsemax loss to large-scale combinatorial domains for the first time, enabling fully differentiable, end-to-end stochastic gradient optimization without Markov chain Monte Carlo (MCMC) sampling. Our method bypasses the traditional bottlenecks of explicit sampling and approximate partition-function estimation, while also predicting log-partition values for unseen instances. We validate the approach on multi-label classification and label ranking tasks, establishing an EBM training paradigm that is differentiable, scalable, and sampling-free.
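The bottleneck the summary refers to can be made concrete: for an EBM over permutations, the partition function Z = Σ_{y∈Sₙ} exp(−E(y)) sums over n! terms. The sketch below brute-forces log Z for a small toy energy (the energy function is an illustrative example, not from the paper) to show how quickly exact computation becomes infeasible:

```python
import itertools
import math

def log_partition_bruteforce(energy, n):
    """Exact log Z = log of the sum, over all n! permutations of {0..n-1},
    of exp(-E(y)). Feasible only for tiny n: the sum grows factorially."""
    total = sum(math.exp(-energy(p)) for p in itertools.permutations(range(n)))
    return math.log(total)

# Toy energy: lower energy (higher probability) for permutations that
# agree with the identity permutation in many positions.
def toy_energy(perm):
    return -float(sum(1 for i, v in enumerate(perm) if i == v))

for n in (3, 6, 9):
    print(f"n={n}: |S_n| = {math.factorial(n):>7,} terms, "
          f"log Z = {log_partition_bruteforce(toy_energy, n):.3f}")
```

Already at n = 9 the exact sum requires 362,880 evaluations; at the scale of real label-ranking problems, enumeration is hopeless, which is the motivation for learning the log-partition jointly rather than computing it.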

📝 Abstract
Energy-based models (EBMs) offer a flexible framework for parameterizing probability distributions using neural networks. However, learning EBMs by exact maximum likelihood estimation (MLE) is generally intractable, due to the need to compute the partition function (normalization constant). In this paper, we propose a novel formulation for approximately learning probabilistic EBMs in combinatorially-large discrete spaces, such as sets or permutations. Our key idea is to jointly learn both an energy model and its log-partition, both parameterized as a neural network. Our approach not only provides a novel tractable objective criterion to learn EBMs by stochastic gradient descent (without relying on MCMC), but also a novel means to estimate the log-partition function on unseen data points. On the theoretical side, we show that our approach recovers the optimal MLE solution when optimizing in the space of continuous functions. Furthermore, we show that our approach naturally extends to the broader family of Fenchel-Young losses, allowing us to obtain the first tractable method for optimizing the sparsemax loss in combinatorially-large spaces. We demonstrate our approach on multilabel classification and label ranking.
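The sparsemax loss the abstract extends is built on the sparsemax transformation of Martins and Astudillo (2016): a Euclidean projection of a score vector onto the probability simplex that, unlike softmax, can assign exactly zero probability. The sketch below is the standard small-dimensional algorithm, shown only for intuition; the paper's contribution is making the corresponding loss tractable over combinatorially large spaces, which this toy does not attempt:

```python
def sparsemax(z):
    """Sparsemax: the Euclidean projection of a score vector z onto the
    probability simplex. Low-scoring entries can receive exactly zero mass."""
    zs = sorted(z, reverse=True)
    cumsum, tau = 0.0, 0.0
    for j, v in enumerate(zs, start=1):
        cumsum += v
        if 1 + j * v > cumsum:          # z_(j) is still in the support
            tau = (cumsum - 1.0) / j    # threshold from the top-j scores
    return [max(v - tau, 0.0) for v in z]

# The output sums to 1 but, unlike softmax, drops the low-scoring entry entirely.
print([round(p, 3) for p in sparsemax([1.2, 0.8, -0.5])])  # [0.7, 0.3, 0.0]
```

This sparsity is what makes the loss attractive for structured prediction: the model can concentrate mass on a small support instead of spreading it over every structure in the space.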
Problem

Research questions and friction points this paper is trying to address.

Discrete Spaces
Energy-Based Models
Partition Function Approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Energy-Based Models
Log-Partition Function Approximation
Sparsemax Loss Function