🤖 AI Summary
Conventional variational learning builds on the Kullback-Leibler (KL) divergence, which constrains the class of models it can express and makes sparsity and robustness hard to achieve simultaneously. Method: We propose a variational learning paradigm based on Fenchel-Young (FY) losses, interpreted as generalized divergences that unify empirical risk and regularization. This yields FY counterparts of the free energy, the evidence and its lower bound (ELBO), and the posterior. Building on these, we develop FYEM, whose E-step replaces the KL term with an adaptively sparse FY alternative, and FYVAE, which supports sparse observations and sparse posteriors while lifting the usual distributional-family constraints. Both rely on alternating minimization and gradient backpropagation, combining FY theory with variational inference in a theoretically grounded and computationally tractable way. Contribution/Results: Experiments show the new methods are empirically competitive, often outperforming classical EM and VAE baselines in robustness and sparse modeling, and exhibit qualitatively new properties.
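For context, a minimal sketch of the FY loss in its standard form (following Blondel et al., JMLR 2020; the notation below is assumed and not taken from this paper):

```latex
% Fenchel-Young loss generated by a convex regularizer \Omega,
% with \Omega^*(\theta) = \sup_{\mu} \langle\theta,\mu\rangle - \Omega(\mu)
% its convex conjugate:
\begin{equation}
  L_\Omega(\theta; y) \;=\; \Omega^*(\theta) + \Omega(y)
    - \langle \theta, y \rangle \;\ge\; 0,
\end{equation}
% with equality iff y \in \partial\Omega^*(\theta) (Fenchel-Young inequality).
% Example: taking \Omega to be the negative Shannon entropy on the simplex
% gives \Omega^*(\theta) = \log\sum_i e^{\theta_i}, and then
%   L_\Omega(\theta; y) = \mathrm{KL}(y \,\|\, \mathrm{softmax}(\theta)),
% recovering the KL divergence at the core of classical variational learning.
```

Choosing a sparsity-inducing regularizer instead (e.g., the Tsallis negentropy behind sparsemax) makes the map $\nabla\Omega^*(\theta)$ assign exactly zero probability to low-scoring outcomes, which is the mechanism behind the adaptive sparsity described above.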
📝 Abstract
From a variational perspective, many statistical learning criteria involve seeking a distribution that balances empirical risk and regularization. In this paper, we broaden this perspective by introducing a new general class of variational methods based on Fenchel-Young (FY) losses, treated as divergences that generalize (and encompass) the familiar Kullback-Leibler divergence at the core of classical variational learning. Our proposed formulation -- FY variational learning -- includes as key ingredients new notions of FY free energy, FY evidence, FY evidence lower bound, and FY posterior. We derive alternating minimization and gradient backpropagation algorithms to compute (or lower bound) the FY evidence, which enables learning a wider class of models than previous variational formulations. This leads to generalized FY variants of classical algorithms, such as an FY expectation-maximization (FYEM) algorithm, and latent-variable models, such as an FY variational autoencoder (FYVAE). Our new methods are shown to be empirically competitive, often outperforming their classical counterparts, and most importantly, to have qualitatively novel features. For example, FYEM has an adaptively sparse E-step, while the FYVAE can support models with sparse observations and sparse posteriors.
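As an illustrative sketch of how the FY evidence, FY ELBO, and FY posterior fit together for a discrete latent variable z (an assumed instantiation consistent with the abstract; the paper's exact definitions may differ in details):

```latex
% Let s(z) = \log p_\theta(x, z) be the score of each latent configuration,
% and \Omega a convex regularizer over distributions q with conjugate \Omega^*.
\begin{align}
  \text{FY evidence:}\quad  & \Omega^*(s) \;=\; \max_q \,\langle q, s\rangle - \Omega(q), \\
  \text{FY ELBO:}\quad      & \mathcal{L}_\Omega(q) \;=\; \langle q, s\rangle - \Omega(q)
                              \;\le\; \Omega^*(s), \\
  \text{FY posterior:}\quad & q^\star \;=\; \nabla\Omega^*(s), \qquad
    \Omega^*(s) - \mathcal{L}_\Omega(q) \;=\; L_\Omega(s; q) \;\ge\; 0.
\end{align}
% Sanity check: with \Omega = negative Shannon entropy on the simplex,
% \Omega^* = \mathrm{logsumexp}, so the FY evidence equals \log p_\theta(x),
% the FY posterior is the exact posterior p_\theta(z \mid x), and the gap
% L_\Omega(s; q) is the usual KL term of the classical ELBO decomposition.
```

Under a sparsity-inducing $\Omega$, the FY posterior $q^\star$ can place exactly zero mass on some latent configurations, which is what makes an adaptively sparse E-step (FYEM) and sparse posteriors (FYVAE) possible.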