Fenchel-Young Variational Learning

📅 2025-02-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Conventional variational learning frameworks have limited modeling capacity and struggle to ensure sparsity and robustness simultaneously. Method: the authors propose a variational learning paradigm based on the Fenchel-Young (FY) loss, interpreted as a generalized divergence that unifies empirical risk and regularization; this yields FY notions of free energy, evidence, evidence lower bound (ELBO), and posterior. Building on this, they develop FYEM and FYVAE, which replace the KL divergence with more general FY divergences: FYEM gains an adaptively sparse E-step, while FYVAE supports sparse observations and sparse posteriors, lifting constraints on the distributional family. The framework combines FY theory, variational inference, and alternating optimization, balancing theoretical rigor with computational tractability. Contribution/Results: experiments show the methods are competitive with, and often outperform, classical EM and VAE baselines across multiple tasks, with improved robustness, enhanced sparse modeling capability, and qualitatively new properties.

📝 Abstract
From a variational perspective, many statistical learning criteria involve seeking a distribution that balances empirical risk and regularization. In this paper, we broaden this perspective by introducing a new general class of variational methods based on Fenchel-Young (FY) losses, treated as divergences that generalize (and encompass) the familiar Kullback-Leibler divergence at the core of classical variational learning. Our proposed formulation -- FY variational learning -- includes as key ingredients new notions of FY free energy, FY evidence, FY evidence lower bound, and FY posterior. We derive alternating minimization and gradient backpropagation algorithms to compute (or lower bound) the FY evidence, which enables learning a wider class of models than previous variational formulations. This leads to generalized FY variants of classical algorithms, such as an FY expectation-maximization (FYEM) algorithm, and latent-variable models, such as an FY variational autoencoder (FYVAE). Our new methods are shown to be empirically competitive, often outperforming their classical counterparts, and most importantly, to have qualitatively novel features. For example, FYEM has an adaptively sparse E-step, while the FYVAE can support models with sparse observations and sparse posteriors.
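As background, the Fenchel-Young loss underlying this framework can be stated using the standard definition from the FY-losses literature (a reference sketch, not notation quoted from this page): for a convex regularizer $\Omega$ with convex conjugate $\Omega^*$,

```latex
L_\Omega(\theta; y) \;=\; \Omega^*(\theta) \;+\; \Omega(y) \;-\; \langle \theta, y \rangle,
\qquad
L_\Omega(\theta; y) \ge 0,
```

with equality if and only if $y \in \partial \Omega^*(\theta)$. Choosing $\Omega$ as the Shannon negentropy on the probability simplex recovers the logistic loss and the KL geometry of classical variational learning, while other regularizers (e.g. the squared norm behind sparsemax) yield the sparsity-inducing divergences this paper exploits.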
Problem

Research questions and friction points this paper is trying to address.

Classical variational learning is tied to the Kullback-Leibler divergence, limiting the class of models it can express
Restricted distributional families constrain posterior inference
Ensuring sparsity and robustness simultaneously is difficult in conventional variational frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variational methods based on Fenchel-Young (FY) losses, with new notions of FY free energy, evidence, ELBO, and posterior
Alternating minimization and gradient backpropagation algorithms to compute or lower-bound the FY evidence
Generalized FY variants of classical algorithms: FYEM with an adaptively sparse E-step, and FYVAE supporting sparse observations and sparse posteriors
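As a concrete illustration (background code, not an implementation from the paper), the FY-loss construction these methods build on can be sketched numerically. With the regularizer chosen as the Shannon negentropy on the simplex, the FY loss reduces to the familiar multinomial logistic (cross-entropy) loss, showing how FY losses encompass the KL setting that classical variational learning relies on:

```python
import math

def shannon_negentropy(p):
    # Omega(p) = sum_i p_i * log(p_i), with 0 * log(0) := 0 (negative Shannon entropy)
    return sum(pi * math.log(pi) for pi in p if pi > 0.0)

def log_sum_exp(theta):
    # Convex conjugate of the Shannon negentropy on the simplex:
    # Omega*(theta) = log sum_j exp(theta_j), computed stably by shifting by the max
    m = max(theta)
    return m + math.log(sum(math.exp(t - m) for t in theta))

def fy_loss(theta, y):
    # Fenchel-Young loss: L_Omega(theta; y) = Omega*(theta) + Omega(y) - <theta, y>
    # Nonnegative, and zero exactly when y = softmax(theta) for this choice of Omega.
    return log_sum_exp(theta) + shannon_negentropy(y) - sum(t * yi for t, yi in zip(theta, y))

theta = [2.0, 0.5, -1.0]
one_hot = [1.0, 0.0, 0.0]
# With a one-hot target, the FY loss equals the logistic loss: logsumexp(theta) - theta[0]
print(fy_loss(theta, one_hot))

# At y = softmax(theta), the loss vanishes, as the FY inequality predicts.
softmax = [math.exp(t - log_sum_exp(theta)) for t in theta]
print(fy_loss(theta, softmax))
```

Swapping in a different regularizer, such as the squared 2-norm that defines sparsemax, produces losses whose minimizing distributions can put exactly zero mass on some outcomes; that sparsity mechanism, standard in the FY-losses literature, is what the adaptively sparse E-step and sparse posteriors described above build on.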
Sophia Sklaviadis
Instituto de Telecomunicacoes, Lisbon, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
Sweta Agrawal
Research Scientist at Google
Machine Translation; Natural Language Generation and Evaluation
António Farinhas
Sword Health
Machine Learning; Natural Language Processing
Andre Martins
Instituto de Telecomunicacoes, Lisbon, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Unbabel, Lisbon, Portugal; Lisbon ELLIS Unit (LUMLIS), Lisbon, Portugal
Mario Figueiredo
Instituto de Telecomunicacoes, Lisbon, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Lisbon ELLIS Unit (LUMLIS), Lisbon, Portugal